QKD – How Quantum Cryptography Key Distribution Works

Forwarded from: https://howdoesinternetwork.com/2016/quantum-key-distribution


QKD – Quantum key distribution is the magic part of quantum cryptography. Every other part of this new cryptographic mechanism remains the same as in the standard cryptography techniques currently in use.

By using quantum particles, which behave under the rules of quantum mechanics, keys can be generated and distributed to the receiver side in a completely safe way. The quantum-mechanical principle that describes the base rule protecting the exchange of keys is Heisenberg's Uncertainty Principle.

Heisenberg's Uncertainty Principle states that it is impossible to measure both the momentum and the current position of a quantum particle at the same time. It furthermore states that the state of the observed particle will change if and when it is measured. This fairly negative axiom (measurement cannot be done without perturbing the system) is used in a positive way by quantum key distribution.

In a real communication system, if somebody tries to intercept the photon-based communication in order to obtain the crypto key being generated by the photon transfer, they will need to squeeze the transferred photons through their own polarization filter to read the information encoded on them. As soon as they try the wrong filter, they will forward a wrong photon. The sender and receiver will notice the disparity in the exchanged data and interpret it as detection of an interception. They will then restart the process of generating a new crypto key.

The photon, and how it is used

1) Photon – The smallest particle of light is the photon. It has three types of spin: horizontal, vertical and diagonal, the last of which can be imagined as left-to-right polarization.

2) Polarization – To polarize a photon means to filter the particle through a polarization filter in order to filter out unwanted spin states. A photon carries all spin states at the same time, and we can manipulate its spin by putting a filter in its path. A photon that has passed through a polarization filter has the particular spin that the filter lets through.

3) Spin – Spin is usually the most complicated property to describe. It is a property of elementary particles such as the electron and the photon. When they move through a magnetic field, they are deflected as if they had the properties of little magnets.

In the classical world, a charged, spinning object has magnetic properties. Elementary particles like photons and electrons have similar properties, even though, by the rules of quantum mechanics, elementary particles cannot literally spin. Despite this inability to spin, physicists named this magnetic property of elementary particles "spin". The name can be a bit misleading, but it helps to remember the fact that a photon will be deflected by a magnetic field. The photon's spin does not change, and it can manifest in two possible orientations.

4) LED – Light-emitting diodes are used to create photons in most quantum-optics experiments. LEDs create unpolarized (real-world) light.

Modern technology has advanced, and today it is possible to use an LED as a source of single photons. In this way a string of photons is created, which is then used on the quantum channel for key generation and distribution between sender and receiver in the quantum key distribution process.

Normal optical networking devices use LED light sources that create photon bursts rather than individual photons. In quantum cryptography, a single photon must be sent at a time, so that it can be polarized on entry into the optical channel and its polarization checked on the exit side.

Data Transmission Using Photons

The most technically challenging part of transmitting data encoded in individual photons is the technique for reading the encoded bit back out of each photon. How is it possible to read the bit encoded in a photon when the very essence of quantum physics makes measuring a quantum state impossible without perturbing it? There is an exception.

We attach one bit of data to each photon by polarizing each individual photon, which is done by filtering it through a polarization filter. The polarized photon is then sent across the quantum channel towards the receiver on the other side.

Heisenberg's Uncertainty Principle enters the experiment with the rule that a photon, once polarized, cannot be measured again, because the measurement will change its state (the ratio between its different spin components).

Fortunately, there is an exception to the Uncertainty Principle that enables measurement, but only in the special case where the photon's spin is measured with a device (a filter, in this case) whose quantum state is compatible with the measured particle.

If a photon's vertical spin is measured with a diagonal filter, the photon will either be absorbed by the filter, or the filter will change the photon's spin properties: the photon passes through, but comes out with diagonal spin. In both cases the information sent by the sender is lost on the receiver side.

The only way to read a photon's currently encoded bit/spin is to pass it through the right kind of filter. If it was polarized diagonally (X), the only way to read that spin is to pass the photon through a diagonal (X) filter. If a rectilinear filter (+) is used in an attempt to read that photon's polarization, the photon will either get absorbed or change its spin, ending up with a different polarization than it had on the source side.
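This reading rule can be sketched in Python. This is an illustrative toy model, not real optics; the basis labels "+" and "X" follow the text:

```python
import random

def read_photon(photon_basis, bit, filter_basis):
    """Toy model of reading a polarized photon through a filter.

    Matching basis: the encoded bit is read reliably.
    Mismatching basis: the photon is absorbed, or passes through with a
    random spin in the filter's basis, so the original bit is lost.
    """
    if photon_basis == filter_basis:
        return bit                       # correct read
    if random.random() < 0.5:
        return None                      # absorbed by the filter
    return random.randint(0, 1)          # re-polarized: random bit in new basis

# A diagonally polarized bit can only be read with the diagonal (X) filter:
assert read_photon("X", 1, "X") == 1
```

A mismatched read, e.g. `read_photon("X", 1, "+")`, returns `None` or a random bit, which models the loss of information described above.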

The spins we can produce when the different polarization filters are used:

  •   Linear polarization (+):
      •   Horizontal spin (–)
      •   Vertical spin (|)
  •   Diagonal polarization (X):
      •   Diagonal spin to the left (\)
      •   Diagonal spin to the right (/)

Key Generation or Key Distribution

The technique of transmitting data using photons in order to generate a secure key at the quantum level is usually referred to as the Quantum Key Distribution process. QKD is sometimes wrongly referred to as quantum cryptography itself, but QKD is only one part of quantum crypto.

Key distribution/generation using photon properties such as spin is solved by Quantum Key Distribution protocols, which allow the exchange of a crypto key with security guaranteed by the laws of physics. Once generated, the key is absolutely secure and can be used with all sorts of conventional crypto algorithms.

The Quantum Key Distribution protocols most commonly mentioned, and most used in today's implementations, are the BB84 protocol and the SARG protocol.

BB84 was the first one invented and is still commonly used; it is also usually the first one described in papers like this one that explain how quantum key exchange works. SARG was created later as an enhancement that brought a different key sifting technique, which is described later in this paper.

1) Attaching an information bit to the photon – Key Exchange

The Key Exchange phase, sometimes referred to as Raw Key Exchange in anticipation of the later need for Key Sifting, is a technique common to both listed Quantum Key Distribution protocols, BB84 and SARG. To transfer numeric (binary) information across the quantum channel, we need to apply a specific encoding to the different photon states. For example, the encoding can be applied as in Table 1 below, making each photon spin carry a different binary value.


Table 1 – QKD – Encoding of photon states
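Table 1 itself is an image and is not reproduced here, so the exact mapping below is an assumption, but a commonly used BB84-style encoding of spins to bits looks like this:

```python
# Assumed BB84-style encoding of photon spins to bits (the exact mapping in
# Table 1 is not reproduced here, so this particular assignment is illustrative):
ENCODING = {
    "-": 0,   # horizontal spin  (rectilinear basis +)
    "|": 1,   # vertical spin    (rectilinear basis +)
    "\\": 0,  # left diagonal    (diagonal basis X)
    "/": 1,   # right diagonal   (diagonal basis X)
}

def encode(bit, basis):
    """Pick the spin that carries `bit` in the given basis ('+' or 'X')."""
    spins = ["-", "|"] if basis == "+" else ["\\", "/"]
    return spins[bit]
```

With this mapping, encoding and then decoding round-trips: `ENCODING[encode(1, "+")]` gives back `1`.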

In the key distribution process, the first step is for the sender to apply polarization to the sent photons and take note of the polarization applied. As an example, we will take Table 2 below as the list of sent photons with their polarization information.

Table 2 – QKD – Encoded photons

The sender sent this binary data:

0 1 0 0 0 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1

If the system works with integers, this data can be formatted as an integer:

Table 3 – Binary to Decimal Conversion Table

The sender sent the key 267155, but this is just the start of the key generation process, in which the initially sent group of bits ( 0 1 0 0 0 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1 ) will be transformed into the real generated and secured key.
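The binary-to-integer conversion can be checked directly:

```python
bits = "01000001001110010011"   # the 20 raw bits sent above
value = int(bits, 2)            # interpret the bit string as an integer
print(value)                    # 267155
```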

2) Reading Information bits on the receiver side

The question arises: how can we use the photon properties described above and still actually read the bits on the receiver side? In the step above, photons with information attached to them were sent to the receiver side.

The next step describes how quantum key distribution, and with it the whole of quantum cryptography, works.

While sending, a list is made containing each photon sent from sender to receiver and the specific spin it was polarized with (the bit of information encoded on each photon).

In the optimal case, when the sender sends a photon with vertical spin and the receiver also applies a vertical filter at the time of arrival, they successfully transfer one bit of data using a quantum particle (the photon). In the less optimal case, when a photon with vertical spin is measured with a diagonal filter, the outcome is a photon with diagonal spin or no photon at all; the latter happens when the photon is absorbed by the filter. In this case, the transferred bit of data will later be dropped in the key sifting or key verification phase.

3) Key Verification – Sifting Key Process

The Key Sifting, or Key Verification, phase is handled differently by the two listed Quantum Key Distribution protocols, BB84 and SARG. The last section described the less optimal case, in which a photon sent with vertical spin is measured with a diagonal filter, leaving the receiver with a photon with diagonal spin or no photon at all.

Key verification now comes into play; it is usually referred to as the Key Sifting process.

In the BB84 protocol, the receiver communicates with the sender, giving him the list of filters applied to every received photon. The sender analyzes that list and responds with a shorter list, made by leaving out the instances where sender and receiver used different filters for a photon transfer.

In the SARG protocol, the receiver gives the sender the list of results he produced from each received photon, without sending the filter orientations used (the difference from BB84). The sender then uses that list, plus the polarizations he applied while sending, to deduce the orientation of the filter used by the receiver. The sender then reveals to the receiver for which transfers he was able to deduce the polarization, and sender and receiver discard all other cases.

In this whole process, the polarized photons are sent through a special optical fiber line.

Taking BB84 as an example, the key sifting process has the receiver send to the sender only the list of filters applied in each photon transfer; the receiver does not send the spin or the bit value obtained as a result. With that in mind, it is clear that the communication channel for key verification need not be a quantum channel: a normal communication channel is enough, without even a need for encryption. Receiver and sender exchange data that is only locally significant to their process of deducing in which steps they succeeded in sending one polarized photon and reading its one bit of information on the other side.

At the end of the key sifting process, assuming no eavesdropping happened, both sides are in possession of exactly the same cryptographic key. The sifted key will be about half the length of the original raw key when BB84 is used, or about a quarter with SARG; the other bits are discarded during sifting.
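The raw exchange and BB84-style sifting described above can be simulated end to end. This is a toy model (random basis choices, no channel noise), so on average about half of the raw bits survive sifting:

```python
import random

random.seed(1)  # deterministic toy run

n = 20
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+X") for _ in range(n)]   # polarization applied
bob_bases   = [random.choice("+X") for _ in range(n)]   # filters applied

# Bob's raw readings: the correct bit on a basis match, a random bit otherwise
bob_bits = [a if ab == bb else random.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting (BB84): both sides keep only the positions where the bases matched
keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
alice_key = [alice_bits[i] for i in keep]
bob_key   = [bob_bits[i] for i in keep]

# With no eavesdropper, the sifted keys are identical
assert alice_key == bob_key
```

Note that the bases are compared over an ordinary (classical) channel, exactly as described above: only the basis lists are exchanged, never the bit values.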

Communication Interception – Key Distillation

1) Interception Detection

If a malicious third party wants to intercept the communication between the two sides in order to read the encoded information, he will have to apply randomly chosen polarization filters to the transmitted photons and then forward the photons on to the original receiver. Since it is not possible to guess every polarization correctly, when sender and receiver validate the polarizations the received data will not match, and the interception of the communication is detected.

On average, an eavesdropper trying to intercept photons will use the wrong filter polarization in half of the cases. This changes the state of those photons, introducing errors into the raw key exchanged by the emitter and receiver.

It is basically the same thing that happens when the receiver uses the wrong filter while trying to read a photon's polarization, except that here the wrong filter is used by an eavesdropper.

In both cases, to prove the integrity of the key, it is enough for sender and receiver to check for errors in the raw key exchange sequence.

Eavesdropping is not the only possible cause of raw key exchange errors. Hardware component issues and imperfections, or environmental effects on the quantum channel, can also cause photon loss or polarization changes. All such errors are treated as possible eavesdropper detections and are filtered out in key sifting. To determine how much information an eavesdropper could have gathered in the process, key distillation is used.
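Extending the toy model, an intercept-and-resend attack can be simulated. Because the eavesdropper guesses the wrong basis about half the time, roughly 25% of the bits that survive sifting disagree between sender and receiver, and that elevated error rate is what the integrity check detects (illustrative simulation with assumed ideal probabilities):

```python
import random

random.seed(0)
n = 4000
errors = matched = 0
for _ in range(n):
    bit, basis = random.randint(0, 1), random.choice("+X")
    # Eve measures with a random basis and resends what she observed
    eve_basis = random.choice("+X")
    eve_bit = bit if eve_basis == basis else random.randint(0, 1)
    # Bob measures the resent photon with his own random basis
    bob_basis = random.choice("+X")
    bob_bit = eve_bit if bob_basis == eve_basis else random.randint(0, 1)
    if bob_basis == basis:              # only these positions survive sifting
        matched += 1
        errors += (bob_bit != bit)

print(errors / matched)                 # close to 0.25
```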

2) Key Distillation

Once we have a sifted key, it must be processed again to remove errors and any information an eavesdropper could have gained. The key that comes out of key distillation is secure enough to be used as a secret key.

For example, for all the photons for which the eavesdropper happened to use the right polarization filter and for which the receiver also used the right filter, no interception is detected. This is where key distillation comes into play.

The first of the two steps is to correct all remaining errors in the key, which is done using a classical error correction protocol. This step also yields the error rate that occurred; from this estimate we can calculate the amount of information the eavesdropper could have about the key.

The second step is privacy amplification, which compresses the key in order to squeeze out the eavesdropper's information. The compression factor is proportional to the error rate.
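Privacy amplification can be sketched with a hash-based compression step. The leakage model below (discarding a fraction of `2 * error_rate`) and the use of SHA-256 are illustrative assumptions, not any specific protocol's formula:

```python
import hashlib

def privacy_amplification(sifted_bits, error_rate):
    """Sketch of privacy amplification: hash the sifted key down to a
    shorter key, compressing more as the estimated error rate grows.
    The leakage model (2 * error_rate) is an assumed toy formula."""
    keep_fraction = max(0.0, 1.0 - 2 * error_rate)
    out_len = int(len(sifted_bits) * keep_fraction)
    digest = hashlib.sha256(bytes(sifted_bits)).hexdigest()
    bits = bin(int(digest, 16))[2:].zfill(256)
    return [int(b) for b in bits[:out_len]]

key = privacy_amplification([0, 1] * 32, 0.1)   # 64 sifted bits, 10% error rate
print(len(key))                                 # 51 bits remain
```

The point of the sketch is the shape of the step: a higher measured error rate means more compression, leaving the eavesdropper with negligible information about the final key.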

Why you shouldn’t ‘be yourself’ at work

‘Be yourself’ is the defining careers advice of the moment. It’s heard everywhere from business leaders in the boardroom to graduation day speeches. It’s so common it’s even a hiring tool for some companies.

One person striving to successfully heed this advice is Michael Friedrich, the Berlin-based vice-president of ScribbleLive, a Canadian software company. For Friedrich, being himself involves wearing shorts to work, and telling prospective clients he’s sleeping on a friend’s living-room floor while he finds a home of his own.

Playing by his own rules has worked well so far, Friedrich says. Thanks to the foreign languages, and well-honed intercultural skills picked up while travelling instead of going to university, he’s landed well-paying jobs. And, despite his unconventional behaviour at ScribbleLive, he’s won a major promotion.


Michael Friedrich bids farewell to his London colleagues before embarking on an 800-mile bicycle ride to Berlin, Germany (Credit: ScribbleLive London)

“I don’t worry about image in the traditional sense. I am the way I am,” says the 44-year-old. “I accept what I’m like and I celebrate it.”

But is ‘be yourself’ good advice for everyone? Just how much of yourself should you reveal to your colleagues? And, are some of us more suited to this ethos than others?

Blurred boundaries 

‘Being yourself’ can backfire in certain circumstances, says Professor Herminia Ibarra, an expert in organisational behaviour and leadership at London Business School and Insead in France.

For instance, her research suggests that people who have been promoted are at risk of failing in their new role if they have a fixed idea of their own ‘authentic’ personality. Rather than adapting their behaviour to fit their changed status, they carry on exactly as before. For instance, someone who sees themselves as open and friendly may share too much of their thoughts and feelings, thus losing credibility and effectiveness, she explains.


Just been promoted to manager? Professor Herminia Ibarra says it’s not always wise to carry on behaving the same way (Credit: Benedict Johnson)

“A very simple definition [of authenticity] is being true to self,” says Ibarra. “But self could be who I am today, who I’ve always been or who I might be tomorrow.”


People can use authenticity as an excuse for staying in their comfort zone, says Ibarra. Faced with change, “oftentimes they say ‘that’s not me’ and they use the idea of authenticity to not stretch and grow”.


The ease with which you adapt your behaviour to fit new situations depends to what degree you’re a ‘chameleon’ or a ‘true-to-selfer’, according to Mark Snyder, a social psychologist at the University of Minnesota. He created a personality test to measure this, called the Self-Monitoring Scale.

Chameleons treat their lives as an opportunity to play a series of roles, carefully choosing their words and deeds to convey just the right impression, says Snyder. In contrast, true-to-selfers use their social dealings with others to convey an unfiltered sense of their personalities, he says.


‘Chameleons’ may change their tune to suit whoever’s in the room – but they are more likely to get ahead, says Mark Snyder (Credit: Getty Images)

The problem with ‘be yourself’ as careers advice is that chameleons have a bit of an edge, says Snyder. That’s because a lot of jobs, particularly ones that are at higher levels in corporations, call for acting and self-presentational skills that favour people who change their deeds to fit the situation.

Earning your stripes

Other research suggests it’s only as you progress up the career ladder that you have the licence, power and opportunity to be authentic. It takes time to earn what sociologists call “idiosyncrasy credits”.

“Senior people have tried, experimented, trial-and-errored different versions of self, found whatever works for them, and consolidated a style,” says Ibarra. “They advise students and junior staff to ‘be yourself’ with good intent, forgetting that it’s been a 30-year process.”


Part of the danger in simply telling people to ‘be yourself’ is that they might think that’s all they need to do, says Jeremiah Stone, a New York-based recruitment specialist at Hudson RPO.


‘Being yourself’ can only get you so far – you’ve got to be able to back it up (Credit: Getty Images)

“It doesn’t mean that you go into an interview or a workplace environment and you behave in the same way you would with your mates. It means that you are engaging authentically with other people, that they get a sense of who you are and what’s important to you and what your values are,” he says. “It’s not bad advice. It’s just not particularly useful advice.”

Even Friedrich is unconvinced by ‘be yourself’ as words of wisdom – particularly for younger people. “The advice ‘be yourself’ – that’s starting in the middle. How can you be yourself if you don’t know yourself?” he says. “Get to know yourself and find out what makes you happy.”


PJSIP: Automatic Switch Transport type from UDP to TCP

We recently encountered SIP signaling command loss issues across different terminals, environments, and scenarios, while using UDP as our preferred transport type.
The potential causes could be:
1. Some SIP commands were larger than the MTU size.
2. The send/receive queue buffer size of the socket handle was not big enough.
3. Some SIP commands (conference control) were really tremendous.

There is some information about this issue below, which could also be a way out of it.

According to RFC 3261 section 18.1.1:
“If a request is within 200 bytes of the path MTU, or if it is larger than 1300 bytes and the path MTU is unknown, the request MUST be sent using an RFC 2914 congestion controlled transport protocol, such as TCP.”

If the request is larger than 1300 bytes

By this rule, PJSIP will automatically send a request over TCP if the request is larger than 1300 bytes. This feature was first implemented in ticket #831. The switching is done on a request-by-request basis: if an initial INVITE originally meant to use UDP ends up being sent over TCP because of this rule, then only that initial INVITE is sent over TCP; subsequent requests will use UDP, unless of course they are also larger than 1300 bytes. In particular, the Contact header stays the same; only the Via header is changed to TCP.
It can also happen that the initial INVITE is sent over UDP and, once the request is challenged with a 401 or 407, its size grows beyond 1300 bytes due to the added Authorization or Proxy-Authorization header. In this case, the retried request will be sent over TCP.
If no TCP transport is instantiated, you will see an error similar to this:
“Temporary failure in sending Request msg INVITE/cseq=15228 (tdta02EB0530), will try next server. Err=171060 (Unsupported transport (PJSIP_EUNSUPTRANSPORT))”
As the error says, the failure is not permanent: PJSIP will send the request anyway with UDP.
This TCP switching feature can be disabled as follows:
● at run-time, by setting pjsip_cfg()->endpt.disable_tcp_switch to PJ_TRUE;
● at compile-time, by setting PJSIP_DONT_SWITCH_TO_TCP to a non-zero value.
You can also tweak the 1300 threshold by setting PJSIP_UDP_SIZE_THRESHOLD to the appropriate value.
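The switching rule above can be summarized in a small sketch. This is illustrative pseudologic in Python, not PJSIP's actual C implementation; the names mirror the settings mentioned above:

```python
UDP_SIZE_THRESHOLD = 1300   # default, corresponds to PJSIP_UDP_SIZE_THRESHOLD

def pick_transport(request_size, configured="UDP", disable_tcp_switch=False):
    """Per-request transport selection following RFC 3261 18.1.1:
    a UDP request larger than the threshold is sent over TCP instead,
    unless the switching feature is disabled."""
    if (configured == "UDP" and not disable_tcp_switch
            and request_size > UDP_SIZE_THRESHOLD):
        return "TCP"
    return configured

# An INVITE that grows past 1300 bytes (e.g. after an Authorization
# header is added on retry) is sent over TCP; smaller requests stay on UDP:
assert pick_transport(1450) == "TCP"
assert pick_transport(900) == "UDP"
assert pick_transport(1450, disable_tcp_switch=True) == "UDP"
```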

The Decline of the Standards-Based Codec—and Good Riddance

I saw this post in Streaming Media magazine and found we share the same opinion on HEVC, so I am forwarding it to my blog.

Online is different from broadcast and doesn't need formal standards. Many online video streamers aren't even considering HEVC, as the future belongs to VP9 and AV1.

Elsewhere in the issue, you find a 4,000-word article I wrote on VP9 that doesn’t mention HEVC. Why? Because for the vast majority of streaming producers that don’t distribute 4K video to smart TVs, the codec decision isn’t VP9 versus HEVC. It’s H.264 versus VP9, and HEVC isn’t really in the picture.

This dynamic highlights the reality that standards-based codecs are declining in importance, particularly in the streaming space. The success of H.264, first with Flash and later with HTML5, merely masked this trend. That is, H.264 was wildly successful in streaming (and later HTML5) because Adobe selected it for Flash, not because it was a technology standard. This is a subtle but critical distinction. It’s also a very significant sea change.

My first job in the codec world involved marketing a proprietary fractal-based codec for use on CD-ROMs. Our biggest competition came from codecs such as Indeo and Cinepak, and from an emerging standard called MPEG-1. My company never got traction, and (according to ancient memory) the three companies that sold MPEG-1 codecs were all purchased for more than $40 million. The lesson burned into my brain was that standard-based codecs always win.

In this regard, there was never any question that MPEG-2 would be the codec for DVD and early cable and satellite systems. The next standard, H.264, was deployed in satellite and cable and all the associated STBs, and later in mobile devices and retail OTT devices such as Roku and Apple TV. H.264 was the best performing codec around, and by the time VP8 arrived, H.264 was impossibly entrenched. Plus, with a reasonable cap of about $5 million per year (back in 2010, now $8.125 million for 2016), H.264 royalties were affordable, ensuring ubiquitous playback.

Fast-forward to 2016. H.264 is still everywhere, but it’s showing its age. VP9 provides the same quality at 50 percent to 60 percent of the bandwidth, and playback is free in the current versions of all browsers except for Internet Explorer and Safari. The Alliance for Open Media launched in September 2015, consolidating the development of three open source codecs into one engineering group. Google, Mozilla, and Microsoft are founding members, ensuring fast browser support for the first codec (called AV1), which should ship by March 2017. Members Netflix, Amazon, and Google (YouTube) will ensure fast deployment by large web publishers, while members ARM, AMD, Intel, and NVIDIA presage prompt support in hardware.

AV1 is free, while HEVC costs up to $1.20 (or more) per unit with a cap of up to $65 million, and that’s just for the two (of potentially four or more) IP owners with announced terms. With VP9 and AV1 freely available, there is no need for HEVC to deliver to computers and notebooks, and there is no business case (or realistic business model) for licensing HEVC in a browser.

The mobile device landscape is less clear. Apple included HEVC in FaceTime but removed any mention of the technology from its spec sheets after the second HEVC patent group formed. This ensures that Apple will pay far more in HEVC royalties than it will ever receive, making a strong business case for deploying AV1. Android 5.0 includes an HEVC software decoder, with hooks to HEVC hardware decoders. However, both royalties are paid by Android licensees, not Google, which is clearly banking on AV1 for the future of YouTube.

Broadcast infrastructures, set-top boxes (STBs), and smart TVs will remain HEVC for a while. But with YouTube choosing VP9/AV1 for its UHD videos and Netflix, Amazon, Microsoft, and the hardware vendors behind AV1, support for the alliance codec in future smart TVs and STBs is assured. HEVC certainly won’t be the only technology these devices support.

The bottom line is that broadcast, with its hundreds of disparate publishers and suppliers, needs a formal standard. The streaming world just needs a reliable, well-supported technology, so a de facto standard set by a group of technology leaders and users is just as good. In fact, it’s better, if you consider the price tag.

This article originally ran in the Autumn 2016 European edition of Streaming Media magazine as “The Decline of the Standards-Based Codec.”

An issue when interoperating with HUAWEI VP9650 using H.460

TE40 caller :, E.164: 02510000
H600 callee :,  E.164: 654320

Pcap file was captured on H600 side.

All exchanged signaling commands between H600 and VP9650:
…Twenty seconds later…
–>ReleaseComplete, DRQ

(h225 or h245) and ((ip.dst eq and ip.src eq or (ip.src eq and ip.dst eq

The H600, after receiving the TCS from the VP9650, did not respond with any further commands, which led to a ReleaseComplete from the VP9650.

Troubleshooting:
Checked the facility commands from the VP9650 and found that its Q.931 CRV value was 0, but with a facility reason of 5 (startH245).

HUAWEI format of facility msg of H460 startH245
But we did not support that kind of rule.
Checking the ITU-T documents, it turned out to be a standard procedure.

You know what should be done.

Android: dlopen fails due to "has text relocations" issue

For some reason, I dug out some apps I programmed several years ago, rebuilt them, and put them on my MI NOTE (Android 6.0) to run some tests.

Here is my cross compile environment:

  • NDK: former downloaded, r7c + r8b
  • SDK: newly downloaded, 24.4.1

But when I tried to run the App on my phone, I got an error like this:

02-15 14:42:58.540: I/OpenGLRenderer(3260): Initialized EGL, version 1.4
02-15 14:42:58.699: W/InputMethodManager(3260): Ignoring onBind: cur seq=164, given seq=163
02-15 14:43:06.718: I/Timeline(3260): Timeline: Activity_launch_request time:6144239
02-15 14:43:06.877: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libffmpeg.so: has text relocations
02-15 14:43:06.897: D/FFMpeg(3260): Couldn't load lib: ffmpeg - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libffmpeg.so: has text relocations
02-15 14:43:06.905: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libezgl.so: has text relocations
02-15 14:43:06.910: D/FFMpeg(3260): Couldn't load lib: ezgl - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libezgl.so: has text relocations
02-15 14:43:06.920: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libeasyonvif.so: has text relocations
02-15 14:43:06.927: D/FFMpeg(3260): Couldn't load lib: easyonvif - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libeasyonvif.so: has text relocations
02-15 14:43:06.927: W/System.err(3260): rg4.net.onvifplayer.RSException: Couldn't load native libs
02-15 14:43:06.927: W/System.err(3260):     at rg4.net.onvifplayer.libEasyRTSP.<init>(libEasyRTSP.java:40)
02-15 14:43:06.927: W/System.err(3260):     at rg4.net.onvifplayer.PlayerActivity.<init>(PlayerActivity.java:33)
02-15 14:43:06.927: W/System.err(3260):     at java.lang.Class.newInstance(Native Method)
02-15 14:43:06.927: W/System.err(3260):     at android.app.Instrumentation.newActivity(Instrumentation.java:1068)
02-15 14:43:06.927: W/System.err(3260):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2322)
02-15 14:43:06.927: W/System.err(3260):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2481)
02-15 14:43:06.928: W/System.err(3260):     at android.app.ActivityThread.access$900(ActivityThread.java:153)
02-15 14:43:06.928: W/System.err(3260):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1349)
02-15 14:43:06.928: W/System.err(3260):     at android.os.Handler.dispatchMessage(Handler.java:102)
02-15 14:43:06.928: W/System.err(3260):     at android.os.Looper.loop(Looper.java:148)
02-15 14:43:06.928: W/System.err(3260):     at android.app.ActivityThread.main(ActivityThread.java:5432)
02-15 14:43:06.928: W/System.err(3260):     at java.lang.reflect.Method.invoke(Native Method)
02-15 14:43:06.928: W/System.err(3260):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:735)
02-15 14:43:06.928: W/System.err(3260):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:625)
02-15 14:43:06.956: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libffmpeg.so: has text relocations
02-15 14:43:06.962: D/FFMpeg(3260): Couldn't load lib: ffmpeg - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libffmpeg.so: has text relocations
02-15 14:43:06.968: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libezgl.so: has text relocations
02-15 14:43:06.974: D/FFMpeg(3260): Couldn't load lib: ezgl - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libezgl.so: has text relocations
02-15 14:43:06.985: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libeasyonvif.so: has text relocations
02-15 14:43:06.991: D/FFMpeg(3260): Couldn't load lib: easyonvif - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libeasyonvif.so: has text relocations
02-15 14:43:06.991: W/System.err(3260): rg4.net.onvifplayer.RSException: Couldn't load native libs
02-15 14:43:06.991: W/System.err(3260):     at rg4.net.onvifplayer.libEasyRTSP.<init>(libEasyRTSP.java:40)
02-15 14:43:06.991: W/System.err(3260):     at rg4.net.onvifplayer.PlayerActivity.onCreate(PlayerActivity.java:65)
02-15 14:43:06.992: W/System.err(3260):     at android.app.Activity.performCreate(Activity.java:6303)
02-15 14:43:06.992: W/System.err(3260):     at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1108)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2374)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2481)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread.access$900(ActivityThread.java:153)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1349)
02-15 14:43:06.992: W/System.err(3260):     at android.os.Handler.dispatchMessage(Handler.java:102)
02-15 14:43:06.992: W/System.err(3260):     at android.os.Looper.loop(Looper.java:148)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread.main(ActivityThread.java:5432)
02-15 14:43:06.992: W/System.err(3260):     at java.lang.reflect.Method.invoke(Native Method)
02-15 14:43:06.992: W/System.err(3260):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:735)
02-15 14:43:06.992: W/System.err(3260):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:625)
02-15 14:43:06.994: E/art(3260): No implementation found for int rg4.net.onvifplayer.libEasyRTSP.NewInstance() (tried Java_rg4_net_onvifplayer_libEasyRTSP_NewInstance and Java_rg4_net_onvifplayer_libEasyRTSP_NewInstance__)

Solution 1:

This issue can be solved by checking the targetSdkVersion in the manifest file.

Using "22" instead of "23" as the targetSdkVersion solved it (see below):

        android:targetSdkVersion="22" />

I also checked the build.gradle file for the compile version and targetSdkVersion:

compileSdkVersion 22
    buildToolsVersion '22.0.1'

    defaultConfig {
        minSdkVersion 15
        targetSdkVersion 22
    }

Solution 2:

The issue was caused by FFmpeg, and it can also be solved by patching in the latest FFmpeg code.


I took the latest from https://github.com/FFmpeg/FFmpeg

You will also need HAVE_SECTION_DATA_REL_RO declared somewhere in your build for the macro in asm.S to use the dynamic relocations option.

Further information:

Previous versions of Android would warn if asked to load a shared library with text relocations:

“libfoo.so has text relocations. This is wasting memory and prevents security hardening. Please fix.”.

Despite this, the OS will load the library anyway. Marshmallow rejects the library if your app's target SDK version is >= 23. The system no longer logs this, because it assumes that your app will log the dlopen(3) failure itself and include the text from dlerror(3), which does explain the problem. Unfortunately, lots of apps seem to catch and hide the UnsatisfiedLinkError thrown by System.loadLibrary in this case, often leaving no clue that the library failed to load until you try to invoke one of your native methods and the VM complains that it's not present.

You can use the command-line scanelf tool to check for text relocations. You can find advice on the subject on the internet; for example https://wiki.gentoo.org/wiki/Hardened/Textrels_Guide is a useful guide.

And you can check whether your shared library has text relocations like this:

readelf -a path/to/yourlib.so | grep TEXTREL

If it has text relocations, it will show you something like this:

0x00000016 (TEXTREL)                    0x0

If this is the case, you may recompile your shared library with the latest NDK version available:

ndk-build -B -j 8

And if you check it again, the grep command will return nothing.

Do the ZTE T800 and HUAWEI TEx0 support T.140?

Both the ZTE T800 and HUAWEI TEx0 claim to support T.140, but after digging into these devices by running some tests between the T800, TE40 and TE60, I'm still not convinced.

Maybe that's only because I don't know how to configure them to enable T.140.

Here is some T.140-related information, along with my steps for analyzing the protocols of the HUAWEI TEx0 and ZTE T800.

A screenshot of the HUAWEI TEx0's administration manual about T.140.



1. T.140 related standard documents





6) RFC4103 – RTP Payload for Text Conversation.pdf

2. Major descriptions of implementing T.140

T.140 related descriptions in T-REC-H.323-200002-S!AnnG!PDF-E.

1) H.245 TCS for T.140

In the capabilities exchange, when using a reliable channel, specify:

DataApplicationCapability.application = t140
DataProtocolCapability = tcp

In the capabilities exchange, when using an unreliable channel, specify:

DataApplicationCapability.application = t140
DataProtocolCapability = udp

2) H.245 Open Logical Channel

In the Open Logical Channel procedure, specify:

OpenLogicalChannel.forwardLogicalChannelParameters = dataType
DataType = data

And select a reliable or unreliable channel for the transfer of T.140 data by specifying the DataApplicationCapability and the DataProtocolCapability as above.

According to the description in T-REC-H.224-200501-I!!PDF-E, there should be only one H.221 channel, but we can still send multiple protocols, like FECC, T.120 and T.140, in that single channel. This type of channel has a name: the H.221 MLP data channel.

3) Packetization of T.140 data

Reliable TCP mode: skipped, because I didn't find any newly established TCP connections.

Unreliable mode: I did find an H.224 capability in both of these entities, but there were no OLC requests other than audio, video, and H.224 data.

Let’s suppose they are re-using the H.221 MLP data channel for both FECC and T.140 transmission.

4) H.224 protocol octet structure


5) H.224 -Standard Client ID Table


3. H.224 data packets sent between TE60 and T800

I managed to extract the H.224 data packets from the PCAP file.

And they look like this:

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff

Explaining the packet using the standard document's description:




7e 7e 7e   Flag octets
00         Upper DLCI                          (Q.922 Address Header)
86         Lower DLCI, 0x6 or 0x7 + EA
c0         UI Mode                             (Q.922 Control Octet)
00         Upper Destination Terminal address  (Data Link Header)
00         Lower Destination Terminal address
00         Upper Source Terminal address
00         Lower Source Terminal address
00         Standard Client ID
03         ES + BS
40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff   Client data octets

Comparing the extracted Standard Client ID against the H.224 Standard Client ID Table, we can conclude that this packet is a CME packet, not a T.140 packet.
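The field breakdown above can be turned into a small parser. This is a minimal sketch based only on the octet layout shown in this post (leading 0x7e flags, a 2-octet Q.922 address, a control octet, 2+2 octets of terminal addresses, the Standard Client ID, and an ES/BS octet); it is not a complete H.224/Q.922 implementation, and the client ID table is abbreviated to the two IDs discussed here.

```python
# Simplified H.224 field extractor, following only the octet layout
# described above; not a complete H.224/Q.922 implementation.
CLIENT_IDS = {0x00: "CME", 0x01: "FECC"}  # abbreviated Standard Client ID table

def parse_h224(hex_string):
    octets = bytes.fromhex(hex_string)
    # Skip the leading 0x7e flag octets
    i = 0
    while octets[i] == 0x7E:
        i += 1
    fields = {
        "q922_address": octets[i:i + 2],      # upper/lower DLCI + EA
        "control": octets[i + 2],             # UI mode
        "dest_terminal": octets[i + 3:i + 5],
        "src_terminal": octets[i + 5:i + 7],
        "client_id": octets[i + 7],           # Standard Client ID
        "es_bs": octets[i + 8],
        "client_data": octets[i + 9:],
    }
    fields["client_type"] = CLIENT_IDS.get(fields["client_id"], "unknown")
    return fields

pkt = "7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff"
info = parse_h224(pkt)
print(hex(info["client_id"]), info["client_type"])  # 0x0 CME
```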

Now that we know how to identify the data type of an H.224 packet, we can classify all the H.224 data packets exchanged between the TE60 and T800.

TE60 –> T800

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 80 00 80 81 12 c8 7e 7e 7e ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 fb c0 c8 a8 bf 3f 3f 7f ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 fb c0 c8 a8 bf 3f 3f 7f ff

T800 –> TE60

7e 7e 7e 00 8e c0 00 00 00 00 00 03 80 00 40 81 f7 00 00 5a 00 00 4c 50 3f 3f 3f 3f 3f 3f 16

7e 7e 7e 00 8e c0 00 00 00 00 00 03 40 00 81 68 a8 0f 92 07 cb 00 28 80 3d f1 ef cf cf cf cf cf cd

7e 7e 7e 00 8e c0 00 00 00 00 80 03 a0 08 0e 45 7e 7e 7e 7e 7e 7e


Among the listed packets, only one is not a CME packet; its Standard Client ID is 0x80.

According to T-REC-H.323-200002-S!AnnG!PDF-E.pdf, we should reverse the octet's bits to get the real value. The bit-reversed value is 0x01, and after checking it against the Standard Client ID Table, we know it's a FECC packet: still not T.140.
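The bit reversal can be sketched like this (a plain per-octet bit reverse; the mapping of 0x01 to FECC follows the Standard Client ID Table referenced above):

```python
def reverse_bits(octet):
    """Reverse the bit order of a single octet (MSB <-> LSB)."""
    result = 0
    for _ in range(8):
        result = (result << 1) | (octet & 1)
        octet >>= 1
    return result

# The non-CME packet above carries Standard Client ID 0x80;
# bit-reversed it becomes 0x01, which the table maps to FECC.
print(hex(reverse_bits(0x80)))  # 0x1
```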


God, I'm lost. Can anyone tell me how to get T.140 working on the ZTE T800 and HUAWEI TE60?

An example of AAC capability in H.245

I keep getting emails from everywhere asking me about AAC audio in H.323.

So I arranged this post as an example complementing my previous posts: http://rg4.net/archives/1480.html, http://rg4.net/archives/1126.html, http://rg4.net/archives/1112.html

The pcap file for this example can be downloaded here: HUAWEI_TE600-vs-ZTE_T800.pcapnp

Here it is.

1. Basic knowledge: AAC LD descriptions in 14496-3

It operates at up to 48 kHz sampling rate and uses a frame length of 512 or 480 samples, compared to the 1024 or 960 samples used in standard MPEG-2/4 AAC to enable coding of general audio signals with an algorithmic delay not exceeding 20 ms. Also the size of the window used in the analysis and synthesis filterbank is reduced by a factor of 2.

And Table 1.3 — Audio Profiles definition of 14496-3 explained AAC format definition, AAC LC or AAC LD.

2. Basic knowledge: AAC capability in description of H.245 TCS

maxBitRate: 640
ProfileAndLevel: nonCollapsing item –> parameterIdentifier: standard = 0
AAC format: nonCollapsing item –> parameterIdentifier: standard = 1
AudioObjectType: nonCollapsing item –> parameterIdentifier: standard = 3
Config(Including sample rate and channel parameters): nonCollapsing item –> parameterIdentifier: standard = 4
MuxConfig: nonCollapsing item –> parameterIdentifier: standard = 8

3. H.245 TCS of HUAWEI TE60 and ZTE T800

HUAWEI TE60:
There are two AAC capabilities:
Capability 1:
collapsing item –> parameterIdentifier=2, parameterValue=2
collapsing item –> parameterIdentifier=5, parameterValue=1
ProfileAndLevel: 24
AAC format: logical (0)
AudioObjectType: 23

Capability 2:
collapsing item –> parameterIdentifier=2, parameterValue=2
collapsing item –> parameterIdentifier=5, parameterValue=1
ProfileAndLevel: 24
AudioObjectType: 23

ZTE T800:
There are four AAC capabilities:
Capability 1:
Capability 2:
Capability 3:
Capability 4:

4. Detailed parameters in the OLC command

TE60 OLC to T800:
maxBitRate: 1280
item 0 –> parameterIdentifier=2, parameterValue=2
item 1 –> parameterIdentifier=5, parameterValue=1
item 0 –> parameterIdentifier=0, value=25
item 1 –> parameterIdentifier=1, value=logical (0)
item 2 –> parameterIdentifier=3, value=23
item 3 –> parameterIdentifier=6, value=logical (0)
item 4 –> parameterIdentifier=8, octetString = 41 01 73 2a 00 11 00
item 5 –> parameterIdentifier=9, octetString = 00 00 00

AOT=23 –> AAC LD
MuxConfig = 41 01 73 2a 00 11 00 –> LATM format
Sample rate = (MuxConfig[2]&0x0f) = 0x73 & 0x0f = 3 = 48K Hz
Channel = (MuxConfig[3]&0xf0)>>4 = (0x2a & 0xf0) >> 4 = 0x20 >> 4 = 2 = Stereo

HUAWEI sent open logical channel with AAC LD stereo to ZTE.

T800 OLC to TE60:
maxBitRate: 1280
item 0 –> parameterIdentifier=2, parameterValue=2
item 1 –> parameterIdentifier=5, parameterValue=1
item 0 –> parameterIdentifier=0, value=25
item 1 –> parameterIdentifier=1, value=logical (0)
item 2 –> parameterIdentifier=3, value=23
item 3 –> parameterIdentifier=6, value=logical (0)
item 4 –> parameterIdentifier=8, octetString = 41 01 73 1a 00 11 00
item 5 –> parameterIdentifier=9, octetString = 00 00 00

AOT=23 –> AAC LD
MuxConfig = 41 01 73 1a 00 11 00 –> LATM format
Sample rate = (MuxConfig[2]&0x0f) = 0x73 & 0x0f = 3 = 48K Hz
Channel = (MuxConfig[3]&0xf0)>>4 = (0x1a & 0xf0) >> 4 = 0x10 >> 4 = 1 = Mono

ZTE sent open logical channel with AAC LD mono to HUAWEI.
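The nibble extraction used in both calculations above can be written out as a small helper. This is a sketch that applies only the masks shown in this post; it is not a full LATM StreamMuxConfig parser, and the sample-rate table is abbreviated to the one index that appears here.

```python
# Decode the sample rate and channel count from the MuxConfig octets,
# using only the bit masks shown above; not a full StreamMuxConfig parser.
SAMPLE_RATES = {3: 48000}  # index 3 -> 48 kHz, per the calculation in this post

def decode_mux_config(hex_string):
    mux = bytes.fromhex(hex_string)
    rate_index = mux[2] & 0x0F       # low nibble of the third octet
    channels = (mux[3] & 0xF0) >> 4  # high nibble of the fourth octet
    return SAMPLE_RATES.get(rate_index, rate_index), channels

print(decode_mux_config("41 01 73 2a 00 11 00"))  # (48000, 2): 48 kHz stereo (TE60)
print(decode_mux_config("41 01 73 1a 00 11 00"))  # (48000, 1): 48 kHz mono (T800)
```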
Any further questions?

How to Change the Buffer on VLC

The VLC media player includes file cache and stream buffer options to enable fine-grained control over video playback on machines with limited system resources. If you use VLC to stream network video, you can set the buffer size on a per-stream or permanent basis. For local file playback, you can raise or lower the file cache size to limit the amount of memory VLC uses or the frequency with which it accesses the disk. For systems with low memory, a low cache setting makes more resources available to the operating system.

Permanently Change the Streaming Buffer

  • Click “Tools” and select “Preferences.” In the lower left of the Preferences dialog, select the “All” button under “Show Settings” to display the advanced settings.
  • Select “Stream Output” from the sidebar menu. The setting that affects buffer size is labeled “Stream Output Muxer Caching.”
  • Enter a new amount in milliseconds in the Muxer Caching field. Since this setting requires a value in milliseconds, the amount of memory it uses varies with the streaming video’s quality. If you have ample RAM but a slow network connection, a high setting such as 2,000 ms to 3,000 ms is safe. You may need to experiment to find the right setting for your machine.

Change the Buffer for Individual Streams

  • Press “Ctrl-N” to open a new network stream, then enter a URL in the address field. VLC supports HTTP, FTP, MMS, UDP and RTSP protocols, and you must enter the full URL in the address field.
  • Select “Show More Options” to display advanced settings for the current network stream. The Caching option controls the streaming buffer size.
  • Enter an amount in milliseconds in the Caching field, then click “Play.” Depending on the cache setting, the video may take a few seconds to start streaming.

Change the Buffer for Local Files

  • Click “Tools” and select “Preferences.” In the lower left of the Preferences dialog, select the “All” button under “Show Settings” to display the advanced settings.
  • Select “Input / Codecs” from the sidebar menu, then scroll to the Advanced section in the Input / Codecs panel.
  • Enter a new amount in the File Caching field. The default setting is 300 ms, which results in VLC accessing your disk roughly three times per second. If video playback stutters on your machine, increasing this setting can make it smoother. However, depending on your RAM and CPU resources, you may need to experiment to find the right setting.

Tips & Warnings

  • Information in this article applies to VLC 2.1.5. It may vary slightly or significantly with other versions.

Source: http://www.ehow.com/how_8454118_change-buffer-vlc.html

Vendor ID, Product ID information in SIP

As you may know, to be a robust meeting entity, we must take good care of compatibility requirements for different facilities from different manufacturers.

In H.323 protocol, we can use fields like Vendor ID, Product ID, Version ID in the signaling commands.

But how do you do this when using the SIP protocol?

  1. Definitions in RFC 3261

20.35 Server

   The Server header field contains information about the software used
   by the UAS to handle the request.

   Revealing the specific software version of the server might allow the
   server to become more vulnerable to attacks against software that is
   known to contain security holes. Implementers SHOULD make the Server
   header field a configurable option.

      Server: HomeServer v2

20.41 User-Agent

   The User-Agent header field contains information about the UAC
   originating the request.  The semantics of this header field are
   defined in [H14.43].

   Revealing the specific software version of the user agent might allow
   the user agent to become more vulnerable to attacks against software
   that is known to contain security holes.  Implementers SHOULD make
   the User-Agent header field a configurable option.

      User-Agent: Softphone Beta1.5



  2. [H14.43] User-Agent definition in RFC 2616

14.43 User-Agent

The User-Agent request-header field contains information about the user agent originating the request. This is for statistical purposes, the tracing of protocol violations, and automated recognition of user agents for the sake of tailoring responses to avoid particular user agent limitations. User agents SHOULD include this field with requests.

The field can contain multiple product tokens (section 3.8) and comments identifying the agent and any subproducts which form a significant part of the user agent. By convention, the product tokens are listed in order of their significance for identifying the application.

User-Agent     = "User-Agent" ":" 1*( product | comment )


User-Agent: CERN-LineMode/2.15 libwww/2.17b3



  3. How did TANDBERG and Polycom implement this?

User-Agent format of TANDBERG 775
Server format of TANDBERG 775


User-Agent format of Polycom

So, jump to the conclusion:

  1. As UAC, identify yourself in User-Agent field.
  2. As UAS, identify yourself in Server field.

Comparing TANDBERG's and Polycom's implementations, TANDBERG's format is the more proper one.
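The two conclusions above can be summarized in a tiny sketch (the product token string is a placeholder, not a real product name):

```python
# Sketch of applying the two rules above when building SIP messages.
# The product token "MyEntity/1.0" is a placeholder, not a real product.
PRODUCT_TOKEN = "MyEntity/1.0"

def add_identity_headers(headers, is_request):
    """As UAC (sending a request), identify in User-Agent;
    as UAS (sending a response), identify in Server."""
    field = "User-Agent" if is_request else "Server"
    headers[field] = PRODUCT_TOKEN
    return headers

print(add_identity_headers({}, is_request=True))   # {'User-Agent': 'MyEntity/1.0'}
print(add_identity_headers({}, is_request=False))  # {'Server': 'MyEntity/1.0'}
```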

This CEO says her riskiest career move was becoming an engineer

I like these words: “There’s never a perfect time for anything—you just have to go for it and keep your eyes on your goal.”

Written by Anne Kreamer
August 25, 2015

In the US, where only 11% of working engineers are women and fewer than 5% of the CEOs of the 500 biggest companies are female, Jennifer Van Buskirk, the president of Cricket Wireless, a subsidiary of AT&T, is something of a freak. In a good way.

Van Buskirk, 42, chose a course of study that many thought was then “risky” for a woman—becoming an engineer. In 1991, when she entered Virginia Tech, she says women and minorities comprised only 15% of engineering students (pdf) nationwide and were much less likely than men to be employed in engineering once they did graduate.

“I was definitely not taking the easy route,” Van Buskirk told Quartz. “I was typically one of the only women in my college classes and often had to work harder than my male counterparts to be heard.”

She says she found the experience exhilarating. “I really liked breaking the mold and challenging the stereotypes about women. Probably because I was confident in my analytical skills and my ability to learn and adapt, so no matter what obstacles were thrown my way, I knew I could figure a way around them.”

In her approach to her education and prospective career, Van Buskirk intuitively understood two critical components of successful risk-taking. She embraced what Stanford University psychology professor Carol Dweck calls a “growth” mind-set, which is a belief in one’s ability to learn, change, and handle challenges. The other attribute is what University of Pennsylvania psychologist Angela Duckworth, calls “grit,” which is “the sustained and focused application of talent over time.”

When AT&T acquired Cricket from Leap Wireless in 2013 the company was losing about a million subscribers a year, but under Van Buskirk’s leadership, it now boasts 5 million subscribers. What she considers her greatest risk, taking a job she knew nothing about, has proven to be worth taking.

I asked Van Buskirk about how she’s navigated professional risk and here’s what she said.

Van Buskirk:

“I’ve taken a lot of perceived risks in my career.

In fact, that sense of confidence, which was formed in me at an early age, has been the lynchpin of my career success. It has enabled me to embrace risk and leverage it versus run from it. In 2005, I put that confidence to the test when I responded to a request from Ralph de la Vega, our current head of Mobility & Business Solutions at AT&T. Back then, Ralph wanted me to interview for his chief of staff role. While this probably doesn’t sound like a very risky move—especially when you compare it to starting Aio Wireless or Cricket Wireless–it sure felt like it—for two reasons.

Reason number one: I knew nothing about the job—literally, zero. And reason number two: I was seven months pregnant at the time.

I was pretty certain I could learn the role, but I remember looking at my pregnant belly and saying to myself, ‘I’m never going to get this job – who would hire me like this?’ And even if I did get hired, would it be career-suicide to jump into a demanding, high profile, new role just to step out to give birth, then jump back in? How much pressure would I be putting on my family and myself in order to make all this work?

And, of course, there were the inevitable skeptics who echoed those doubts. But I realized there’s never a perfect time for anything—you just have to go for it and keep your eyes on your goal. And my goal was to be a leader and, as such, do something impactful for the company.

I ended up getting the chief of staff job, which led to other roles within AT&T, which, eventually, brought me to where I am today, president of Cricket Wireless. By staying confident and believing in myself and my capabilities; by ensuring that I don’t let others define me; and by embracing change instead of avoiding it, I’ve been able to chart my own course. And that course has included leading Cricket Wireless—one of the most successful consumer brands in the wireless industry.

I’ve taken what many perceive as big risks in my career, but I’ve always viewed them as opportunities. I’ve tackled challenges head-on and I’ve learned to be comfortable with being uncomfortable. I think that’s the key: don’t let perception become reality. If you look for the opportunity, you can always mitigate the risk.”

We welcome your comments at ideas@qz.com.

Source: http://qz.com/486324/this-ceo-says-her-riskiest-career-move-was-becoming-an-engineer

Troubleshooting: step by step crash analysis

This post's goal is to guide a beginner through analyzing a crash by reading the assembly code.

The example listed here is not a great one, though, because the crash point is not an obvious one, and the real cause of the crash in this example still remains uncovered.

My point is that you can use this kind of approach to analyze a crash, and once you have read this post, you can take the first step. If you run into any problems while analyzing your own crash, we can discuss them together here. Here we go. Continue reading “Trouble shooting: step by step to analysis crashes”

A common bug of HD3 series terminals

An issue of call establishment delay when conferencing with Polycom MCU RMX2000

The situation was

1. Meeting entities
1). Polycom MCU: Polycom RMX 2000, version ID: 8.3.0
2). Kedacom HD3 H600 SP4

2. Call scenario
HD3 joined a multi-point conference with RMX2000.
1) All the H.225 and H.245 processes were OK.
2) OLC request from both side returned with ACK.
3) The audio packets could be captured right after the OLC ACK.
4) The video packets from HD3 sent right after got the OLC ACK from the MCU.
5) HD3 could not receive any video packets from the MCU.
6) HD3 was waiting for a terminalYouAreSeeing conferenceIndication from the MCU to switch its status to InConf…
7) 20 seconds later, we finally got the terminalYouAreSeeing indication, and along with it, the video.

It seems the MCU was waiting for a command to switch its status to an established mode.
But we just didn't know what it was, even after testing lots of MTs from Polycom, Tandberg, HUAWEI and ZTE, all of which worked just fine.

All we knew was that it had to be an HD3 bug.

After a long, long comparison of the pcap files, the only difference was the H.224 channel.
We did not open the H.224 (FECC) channel together with the audio, video and H.239 channels, and this caused the RMX2000 to wait 20 seconds before sending the terminalYouAreSeeing indication.
It's yet another long-standing bug; we survived it for a long time, but today we finally ran into the consequences.

PCAP file: a-common-bug-of-hd3-series-terminals.pcap

RTCP and AVPF related missing features

Most of the missing features are AVPF related, which is defined in RFC4585 and RFC5104.

RFC4585: Extended RTP Profile for Real-time Transport Control Protocol (RTCP)-Based Feedback (RTP/AVPF)
RFC5104:  Codec Control Messages in the RTP Audio-Visual Profile with Feedback (AVPF)

AVPF contains a mechanism for conveying such a message, but did not specify for which codec and according to which syntax the message should conform.  Recently, the ITU-T finalized Rec.H.271, which (among other message types) also includes a feedback message.  It is expected that this feedback message will fairly quickly enjoy wide support.  Therefore, a mechanism to convey feedback messages according to H.271 appears to be desirable.

RTCP Receiver Report Extensions
1. CCM – Codec Control Message
2. FIR – Full Intra Request Command
A Full Intra Request (FIR) Command, when received by the designated
media sender, requires that the media sender sends a Decoder Refresh
Point (see section 2.2) at the earliest opportunity.  The evaluation
of such an opportunity includes the current encoder coding strategy
and the current available network resources.

FIR is also known as an “instantaneous decoder refresh request”,
“fast video update request” or “video fast update request”.

3. TMMBR – Temporary Maximum Media Stream Bit Rate Request
4. TMMBN – Temporary Maximum Media Stream Bit Rate Notification

Example from RFC5104:

Receiver A: TMMBR_max total BR = 35 kbps, TMMBR_OH = 40 bytes
Receiver B: TMMBR_max total BR = 40 kbps, TMMBR_OH = 60 bytes

For a given packet rate (PR), the bit rate available for media
payloads in RTP will be:

Max_net media_BR_A =
TMMBR_max total BR_A – PR * TMMBR_OH_A * 8 … (1)

Max_net media_BR_B =
TMMBR_max total BR_B – PR * TMMBR_OH_B * 8 … (2)

For a PR = 20, these calculations will yield a Max_net media_BR_A =
28600 bps and Max_net media_BR_B = 30400 bps, which suggests that
receiver A is the limiting one for this packet rate.  However, at a
certain PR there is a switchover point at which receiver B becomes
the limiting one.  The switchover point can be identified by setting
Max_media_BR_A equal to Max_media_BR_B and breaking out PR:

     TMMBR_max total BR_A - TMMBR_max total BR_B
PR = -------------------------------------------   … (3)
           8 * (TMMBR_OH_A - TMMBR_OH_B)
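Plugging the sample values into equations (1) to (3) takes only a few lines; the functions below simply restate the equations above:

```python
# Worked example of equations (1)-(3) above, using the RFC 5104 sample values.
def max_net_media_br(tmmbr_max_total_br, tmmbr_oh, pr):
    """Bit rate left for RTP media payloads at packet rate `pr` (eq. 1/2)."""
    return tmmbr_max_total_br - pr * tmmbr_oh * 8

def switchover_pr(br_a, oh_a, br_b, oh_b):
    """Packet rate at which the limiting receiver changes (eq. 3)."""
    return (br_a - br_b) / (8 * (oh_a - oh_b))

# Receiver A: 35 kbps max total BR, 40-byte overhead
# Receiver B: 40 kbps max total BR, 60-byte overhead
print(max_net_media_br(35000, 40, 20))      # 28600 bps: A limits at PR = 20
print(max_net_media_br(40000, 60, 20))      # 30400 bps
print(switchover_pr(35000, 40, 40000, 60))  # 31.25 packets/s
```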

5. TSTR – Temporal-Spatial Trade-off Request

6. TSTN – Temporal-Spatial Trade-off Notification

7. VBCM – H.271 Video Back Channel Message

8. RTT – Round Trip Time
A receiver that receives a request closely after
sending a decoder refresh point — within 2 times the longest round
trip time (RTT) known, plus any AVPF-induced RTCP packet sending
delays — should await a second request message to ensure that the
media receiver has not been served by the previously delivered
decoder refresh point.  The reason for the specified delay is to
avoid sending unnecessary decoder refresh points.

9a. PLI – Picture Loss Indication
9b. SLI – Slice Loss Indication
9c. RPSI – Reference Picture Selection Indication

Here’s a sample INVITE command relayed from FreeSWITCH:

INVITE sip:1009@;transport=tcp SIP/2.0
Via: SIP/2.0/TCP;branch=z9hG4bK6p37yQX86QXar
Route: <sip:1009@>;transport=tcp
Max-Forwards: 69
From: "Extension 1008" <sip:1008@>;tag=DKK4FpBB3ptSS
To: <sip:1009@;transport=tcp>
Call-ID: 0199ec1f-9e53-1233-8583-000c29f7d152
CSeq: 77747697 INVITE
Contact: <sip:mod_sofia@;transport=tcp>
User-Agent: FreeSWITCH-mod_sofia/1.7.0+git~20150614T062551Z~a647b42910~64bit
Supported: timer, path, replaces
Allow-Events: talk, hold, conference, presence, as-feature-event, dialog, line-seize, call-info, sla, include-session-description, presence.winfo, message-summary, refer
Content-Type: application/sdp
Content-Disposition: session
Content-Length: 495
X-FS-Support: update_display,send_info
Remote-Party-ID: "Extension 1008" <sip:1008@>;party=calling;screen=yes;privacy=off

o=FreeSWITCH 1436150633 1436150634 IN IP4
c=IN IP4
t=0 0
m=audio 16890 RTP/AVP 96 0 8 101
a=rtpmap:96 opus/48000/2
a=fmtp:96 useinbandfec=1; stereo=0; sprop-stereo=0
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:101 telephone-event/8000
a=fmtp:101 0-16
m=video 22404 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 profile-level-id=42801F
a=rtcp-fb:96 ccm fir tmmbr
a=rtcp-fb:96 nack
a=rtcp-fb:96 nack pli
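For a quick look at what feedback mechanisms a peer advertises, the a=rtcp-fb lines in an SDP like the one above can be grouped per payload type. A minimal sketch (it only handles the attribute format shown here):

```python
# Minimal sketch: collect the RTCP feedback mechanisms advertised per
# payload type from "a=rtcp-fb:" lines in an SDP body.
def rtcp_fb_by_payload(sdp_text):
    fb = {}
    for line in sdp_text.splitlines():
        if line.startswith("a=rtcp-fb:"):
            pt, mechanism = line[len("a=rtcp-fb:"):].split(" ", 1)
            fb.setdefault(pt, []).append(mechanism)
    return fb

sdp = """m=video 22404 RTP/AVP 96
a=rtpmap:96 H264/90000
a=rtcp-fb:96 ccm fir tmmbr
a=rtcp-fb:96 nack
a=rtcp-fb:96 nack pli"""
print(rtcp_fb_by_payload(sdp))  # {'96': ['ccm fir tmmbr', 'nack', 'nack pli']}
```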

Sample SDP of WebRTC for Firefox

GET /socket.io/1/websocket/GgKg1qt9TCXtfPb6n2g0 HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:36.0) Gecko/20100101 Firefox/36.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: pPQe97SI5k09yaPnVLa2RQ==
Connection: keep-alive, Upgrade
Pragma: no-cache
Cache-Control: no-cache
Upgrade: websocket

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: Lm5/9dDv4pjyphQQeswS+V+AiKc=

...Q^.U...BK......IIP.F....Q'.A...z..T5:::{"name":"log","args":[[">>> Message from server: ","Room foo has 1 client(s)"]]}.`5:::{"name":"log","args":[[">>> Message from server: ","Request to create or join room","foo"]]}.$5:::{"name":"joined","args":["foo"]}.B5:::{"name":"emit(): client GgKg1qt9TCXtfPb6n2g0 joined room foo"}..$.N...t _. {I.l ..+iW.)...l{V.=8..l}K.noW.<:I.*sE..g.Z5:::{"name":"log","args":[[">>> Message from server: ","Got message: ","got user media"]]}.~.x5:::{"name":"message","args":[{"type":"offer","sdp":"
o=mozilla...THIS_IS_SDPARTA-38.0 2820695485956467000 0 IN IP4
t=0 0
a=fingerprint:sha-256 7C:7B:AE:C2:AE:ED:14:39:A4:7A:EE:4B:FB:FE:90:90:E8:A1:0B:C1:50:FC:C8:9C:FA:28:68:22:EE:1C:F6:97
a=group:BUNDLE sdparta_0 sdparta_1
a=msid-semantic:WMS *
m=audio 9 RTP/AVP 109 9 0 8
c=IN IP4
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
a=msid:{69b2b229-1dc0-4291-a703-aafe505d477b} {ebc6bb1c-8525-4a70-9601-354b53c5c103}
a=rtpmap:109 opus/48000/2
a=rtpmap:9 G722/8000/1
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=ssrc:4051396866 cname:{f0f8a3ab-8c54-4694-872a-98dd14f0c821}
m=video 9 RTP/AVP 126 97
c=IN IP4
a=fmtp:120 max-fs=12288;max-fr=60
a=fmtp:126 profile-level-id=42e01f;level-asymmetry-allowed=1;packetization-mode=1
a=fmtp:97 profile-level-id=42e01f;level-asymmetry-allowed=1
a=msid:{69b2b229-1dc0-4291-a703-aafe505d477b} {34f61d33-9fe4-42ff-8e2b-ef9c465c6f67}
a=rtcp-fb:120 nack
a=rtcp-fb:120 nack pli
a=rtcp-fb:120 ccm fir
a=rtcp-fb:126 nack
a=rtcp-fb:126 nack pli
a=rtcp-fb:126 ccm fir
a=rtcp-fb:97 nack
a=rtcp-fb:97 nack pli
a=rtcp-fb:97 ccm fir
a=rtpmap:126 H264/90000
a=rtpmap:97 H264/90000
a=ssrc:3993721606 cname:{f0f8a3ab-8c54-4694-872a-98dd14f0c821}



Restore blogroll function for your WordPress

I found that some features were missing after upgrading my blog to the latest version, WordPress v4.0, such as blogrolls.
I tried to find them all over the dashboard, but had no luck.
So I turned to the almighty Google and finally got the solution, and it's really easy. All you need to do is add the following code to the end of your theme's functions.php: