UniSVR is shutting down Shanghai office

April 1, 2015, a bright sunny day in Shanghai.
Today is supposed to be a day for jokes; however, instead of the usual April Fools' Day news came an astonishing announcement: UniSVR is going to shut down its Shanghai office within a month.

Sigh, lament, or not, the UniSVR Shanghai branch will soon be history, finally announcing and revealing the ending of a 15-year-old branch office, where I worked alongside so many guys and gals for more than ten years.

Sorry, regretful, or not, life should and will go on. I talked with my previous boss, Mars Chen, VP of UniSVR, responsible for product and surveillance product line strategy, just minutes ago. We did not talk about why, because we all knew it: it's about business, it's about life and living, and life always goes on.

Sad, painful, or not, let us all move on. Twelve years ago, the most shocking news was that Leslie Cheung (张国荣) left us. It was really sad, and we could do nothing about it. Today, we have moved on. One month later, UniSVR Shanghai will be closed; we can do nothing about it either, and we will also move on. Besides, this might leave UniSVR a clearer and brighter future. I'm not saying this because I have already left UniSVR, but truly from my heart.

And I do care about UniSVR, although I resigned from it about two years ago.
I still pay close attention to UniSVR.

And lots of things have happened in those almost two years.
In the two years, digital surveillance is no longer the strategic product for UniSVR.
In the two years, the business in mainland China kept shrinking.
In the two years, lots of key employees left, and new ones came and left.
In the two years, development work on the IoT product, which is considered the future of UniSVR, was transferred from Shanghai to Beijing and Hsinchu.
Most importantly, some of us spent our youth at UniSVR, especially guys like Michael, Maggie, etc.

And we both knew:
In the 15 years, UniSVR was once great, not only in the global market but also in mainland China.
In the 15 years, no matter where you sat in the organization chart, and no matter whether he or she worked hard or not, deep down in the heart, every UniSVRer kept fighting for a bright future for both themselves and UniSVR China.

I do feel sorry that I didn't get a chance to send my best wishes to UniSVR two years ago when I left.
Back then, I planned a lot for the farewell, but not everything goes as you planned, so I chose silence and only left a post on my blog, http://rg4.net/archives/455.html.
This time, allow me to say it sincerely and loudly: God bless UniSVR, and I wish UniSVR a bright future.

It seems today is even harder than the day I left UniSVR. It's definitely a sleepless night for me, though I don't even know what I am thinking about, or writing about.

QKD – How Quantum Cryptography Key Distribution Works

Forwarded from: https://howdoesinternetwork.com/2016/quantum-key-distribution


QKD – Quantum key distribution is the magic part of quantum cryptography. Every other part of this new cryptography mechanism remains the same as in standard cryptography techniques currently used.

By using quantum particles which behave under rules of quantum mechanics, keys can be generated and distributed to receiver side in completely safe way. Quantum mechanics principle, which describes the base rule protecting the exchange of keys, is Heisenberg’s Uncertainty Principle.

Heisenberg's Uncertainty Principle states that it is impossible to measure both the momentum and the current position of a quantum particle at the same time. It furthermore states that the state of the observed particle will change if and when measured. This fairly negative axiom, which says that a measurement cannot be done without perturbing the system, is used in a positive way by quantum key distribution.

In a real communication system, if somebody tries to intercept the photon-powered communication in order to obtain the crypto key being generated by this photon transfer, they will need to squeeze the transferred photons through their polarization filter to read the information encoded on them. As soon as they try with the wrong filter, they will forward a wrong photon. The sender and receiver will notice the disparity in the exchanged data and interpret it as detection of interception. They will then restart the process of generating a new crypto key.

The photon, and how is it used?

1) Photon – The smallest particle of light is a photon. It has three types of spin: horizontal, vertical, and diagonal, which can be imagined as right-to-left polarization.

2) Polarization – Polarization is used to polarize a photon. Polarizing the photon means filtering the particle through a polarization filter in order to filter out unwanted types of spin. A photon has all three spin states at the same time. We can manipulate the spin of a photon by putting a filter in its path. A photon, when passed through a polarization filter, has the particular spin that the filter lets through.

3) Spin – Spin is usually the most complicated property to describe. It is a property of some elementary particles, like the electron and the photon. When they move through a magnetic field, they are deflected as if they had the properties of little magnets.

If we take the classical world as an example, a charged, spinning object has magnetic properties. Elementary particles like photons or electrons have similar properties. We know by the rules of quantum mechanics that elementary particles cannot literally spin. Regardless of this inability to spin, physicists named this magnetic property of elementary particles "spin". It can be a bit misleading, but it helps to remember the fact that a photon will be deflected by a magnetic field. The photon's spin does not change, and it can manifest in two possible orientations.

4) LED – Light-emitting diodes are used to create photons in most quantum-optics experiments. LEDs create unpolarized (real-world) light.

Modern technology has advanced, and today it is possible to use an LED as a source of single photons. In this way a string of photons is created, which is then used in the quantum channel for key generation and distribution in the quantum key distribution process between sender and receiver.

Normal optical networking devices use LED light sources which create photon bursts instead of individual photons. In quantum cryptography, one single photon at a time needs to be sent in order to have the chance to polarize it at the entrance of the optical channel and check the polarization on the exit side.

Data Transmission Using Photons

The most technically challenging part of transmitting data encoded in individual photons is the technique for reading the encoded bit of data out of each photon. How is it possible to read the bit encoded in a photon when the very essence of quantum physics makes the measurement of a quantum state impossible without perturbation? There is an exception.

We attach one bit of data to each photon by polarizing each individual photon. Polarizing a photon is done by filtering it through a polarization filter. The polarized photon is sent across the quantum channel towards the receiver on the other side.

Heisenberg's Uncertainty Principle comes into the experiment with the rule that a photon, once polarized, cannot be measured again, because the measurement will change its state (the ratio between the different spins).

Fortunately, there is an exception to the Uncertainty Principle which enables the measurement, but only in the special case when the measurement of the photon's spin properties is done with a device (a filter, in this case) whose quantum state is compatible with the measured particle.

In the case when a photon's vertical spin is measured with a diagonal filter, the photon will either be absorbed by the filter, or the filter will change the photon's spin properties: the photon will pass through the filter but will get a diagonal spin. In both cases, the information that was sent by the sender is lost on the receiver side.

The only way to read a photon's currently encoded bit/spin is to pass it through the right kind of filter. If it was polarized with diagonal polarization (X), the only way to read this spin is to pass the photon through a diagonal (X) filter. If a vertical filter (+) is used in an attempt to read that photon's polarization, the photon will either get absorbed, or it will change its spin and end up with a different polarization than it had on the source side.

The spins we can produce when different polarization filters are used:

  •   Linear Polarization (+)
      •   Horizontal Spin (–)
      •   Vertical Spin (|)
  •   Diagonal Polarization (X)
      •   Diagonal Spin to the left (\)
      •   Diagonal Spin to the right (/)

Key Generation or Key Distribution

The technique of data transmission using photons in order to generate a secure key at the quantum level is usually referred to as the Quantum Key Distribution process. Sometimes QKD is also wrongly referred to as Quantum Cryptography; QKD is only a part of quantum crypto.

Key distribution/generation using photon properties like spin is solved by Quantum Key Distribution protocols, allowing the exchange of a crypto key with security guaranteed by the laws of physics. When finally generated, the key is absolutely secure and can be further used with all sorts of conventional crypto algorithms.

The Quantum Key Distribution protocols commonly mentioned and mostly in use in today's implementations are the BB84 protocol and the SARG protocol.

BB84 was invented first and is still commonly used. It is the first to be described in papers like this one which try to explain how quantum key exchange works. SARG was created later as an enhancement which brought a different key sifting technique, described later in this article.

1) Attaching Information bit on the photon – Key Exchange

The Key Exchange phase, sometimes referred to as Raw Key Exchange in anticipation of the later need for Key Sifting, is a technique common to both listed Quantum Key Distribution protocols, BB84 and SARG. To be able to transfer numeric (binary) information across the quantum channel, we need to apply a specific encoding to the different photon states. For example, encoding can be applied as in Table 1 below, making different photon spins carry different binary values.


Table 1 – QKD – Encoding of photon states
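Since the image of Table 1 is not reproduced here, a common BB84-style mapping can be sketched in Python. The exact assignment of bits to spins below is an assumption; conventions differ between descriptions of the protocol:

```python
# A common BB84-style encoding of photon spins to bits. This particular
# assignment is an assumption, standing in for the original Table 1.
SPIN_TO_BIT = {
    "-": 0,    # horizontal spin (linear basis, +)
    "|": 1,    # vertical spin (linear basis, +)
    "\\": 0,   # diagonal spin to the left (diagonal basis, X)
    "/": 1,    # diagonal spin to the right (diagonal basis, X)
}

def encode_bit(bit, basis):
    """Choose the spin that carries `bit` when sent in `basis` ('+' or 'X')."""
    if basis == "+":
        return "|" if bit else "-"
    return "/" if bit else "\\"

# A bit value survives a round trip through the encoding:
spin = encode_bit(1, "X")
print(spin, SPIN_TO_BIT[spin])  # / 1
```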

In the process of key distribution, the first step is for the sender to apply polarization to the sent photons and take note of the applied polarization. As an example, we will take Table 2 below as the list of sent photons with their polarization information listed.

Table 2 – QKD – Encoded photons

Sender sent binary data:

0 1 0 0 0 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1

If the system works with integers, this data can be converted to integer format:

Table 3 – Binary to Decimal Conversion Table

The sender sent the key 267155, but this is just the start of the key generation process, in which this key will be transformed from the firstly sent group of bits (0 1 0 0 0 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1) into the real, generated and secured key.
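The conversion from the raw bit string above to the decimal key 267155 can be checked with a couple of lines of Python:

```python
# The 20 raw bits listed above, most significant bit first.
bits = [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1]

# Interpret the bit string as one big-endian binary number.
key = int("".join(str(b) for b in bits), 2)
print(key)  # 267155
```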

2) Reading Information bits on the receiver side

The question arises: how can we use the properties of the photon described above and still be able to actually read it on the receiver side? In the step above, photons with information attached to them were sent to the receiver side.

The next step will describe how quantum key distribution, and with that the whole quantum cryptography, works.

While sending, a list is made containing each photon sent from sender to receiver, together with the specific spin it was polarized with (the bit of information encoded on each photon).

In the optimal case, when the sender sends a photon with vertical spin and the receiver also applies a vertical filter at the time of the photon's arrival, they will successfully transfer a bit of data using a quantum particle (the photon). In the less optimal case, when a photon with vertical spin is measured with a diagonal filter, the outcome will be a photon with diagonal spin or no photon at all. The latter happens if the photon is absorbed by the filter. In this case, the transferred bit of data will later get dumped in the key sifting or key verification phase.
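The optimal and less optimal cases can be sketched as a toy measurement function. This is a deliberate simplification (real photon behavior is probabilistic in a subtler way), and the spin symbols follow the list given earlier:

```python
import random

RECTILINEAR = {"|", "-"}   # spins of the vertical/horizontal (+) basis
DIAGONAL = {"/", "\\"}     # spins of the diagonal (X) basis

def measure(spin, basis):
    """Toy model of measuring a photon's spin with a filter of `basis`.

    A matching basis reads the spin faithfully. A mismatched basis yields
    a random spin of the filter's basis, or None if the photon is absorbed.
    """
    matches = (basis == "+" and spin in RECTILINEAR) or \
              (basis == "X" and spin in DIAGONAL)
    if matches:
        return spin
    return random.choice(sorted(DIAGONAL if basis == "X" else RECTILINEAR) + [None])

print(measure("|", "+"))  # matching filter: always reads back "|"
```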

3) Key Verification – Sifting Key Process

The key sifting phase, or key verification, is a technique done differently in the two listed Quantum Key Distribution protocols, BB84 and SARG. In the last section, the less optimal case when a photon with vertical spin is measured with a diagonal filter was described. The outcome for that photon, sent with vertical spin but measured with a diagonal filter, will give the receiver a photon with diagonal spin or no photon at all.

Key verification comes into play now, and it is usually referred to as the Key Sifting process.

In the BB84 protocol, the receiver communicates with the sender and gives him the list of filters applied for every received photon. The sender analyzes that list and responds with a shorter list back. That list is made by leaving out the instances where sender and receiver used different filters for a single photon transfer.

In the SARG protocol, the receiver gives the sender the list of results he produced from each received photon, without sending the filter orientations used (the difference from BB84). The sender then needs to use that list, plus the polarization he applied while sending, to deduce the orientation of the filter used by the receiver. The sender then unveils to the receiver for which transfers he was able to deduce the polarization. Sender and receiver discard all other cases.

In this whole process, the sending of polarized photons is done through a special line of optical fiber cable.

If we take BB84 as an example, the key sifting process is done by the receiver sending the sender only the list of polarizations applied in each photon transfer. The receiver does not send the spin or the value he got as a result of that transfer. Having that in mind, it is clear that the communication channel for key verification need not be a quantum channel, but rather a normal communication channel, without even the need for encryption. Receiver and sender exchange data that is only locally significant to their process of deducing in which steps they succeeded in sending one polarized photon and reading the photon's one bit of information on the other side.

At the end of the key sifting process, assuming no eavesdropping happened, both sides will be in possession of exactly the same cryptographic key. The key after the sifting process will be half the original raw key length when BB84 is used, or a quarter with SARG. The other bits are discarded in the sifting process.
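Assuming no eavesdropper, a whole BB84 exchange-and-sift round can be simulated in a few lines. This is an illustrative sketch: filter mismatches are modeled simply as random read-outs, and basis symbols ('+' and 'x') are this sketch's own convention:

```python
import random

random.seed(42)  # deterministic run for illustration

n = 20
# Sender picks random bits and random bases ('+' rectilinear, 'x' diagonal).
sender_bits  = [random.randint(0, 1) for _ in range(n)]
sender_bases = [random.choice("+x") for _ in range(n)]
# Receiver independently picks a random measurement basis per photon.
receiver_bases = [random.choice("+x") for _ in range(n)]

# With no eavesdropper, the receiver reads the correct bit exactly when the
# bases match; otherwise the read-out is random.
receiver_bits = [b if sb == rb else random.randint(0, 1)
                 for b, sb, rb in zip(sender_bits, sender_bases, receiver_bases)]

# Sifting (BB84): receiver announces bases; both keep only matching positions.
keep = [i for i in range(n) if sender_bases[i] == receiver_bases[i]]
sifted_sender   = [sender_bits[i] for i in keep]
sifted_receiver = [receiver_bits[i] for i in keep]

assert sifted_sender == sifted_receiver  # identical keys without eavesdropping
print(len(keep), "of", n, "raw bits survive sifting")
```

On average about half of the raw bits survive, matching the BB84 figure given above.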

Communication Interception – Key Distillation

1) Interception Detection

If a malicious third party wants to intercept the communication between the two sides in order to read the encoded information, he will have to randomly apply polarization to the transmitted photons. After measuring, this third party needs to forward the photons on to the original receiver. As it is not possible to guess all the polarizations correctly, when sender and receiver validate the polarizations, the receiver will not be able to decrypt the data, and the interception of communication is detected.

On average, an eavesdropper trying to intercept photons will use the wrong filter polarization in half of the cases. By doing this, the state of those photons will be changed, introducing errors into the raw key exchanged by the emitter and receiver.

It is basically the same thing that happens when the receiver uses the wrong filter while trying to read a photon's polarization; here the same wrong filter is simply used by an eavesdropper instead.

In both cases, to prove the integrity of the key, it is enough that sender and receiver check for errors in the sequence of the raw key exchange.
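An intercept-and-resend eavesdropper can be added to the same kind of toy model. On average the eavesdropper picks the wrong basis half the time, which leaves roughly a 25% error rate in the sifted key for sender and receiver to detect. A sketch, with the same simplified random read-out assumption as before:

```python
import random

random.seed(7)  # deterministic run for illustration

n = 2000
sender_bits  = [random.randint(0, 1) for _ in range(n)]
sender_bases = [random.choice("+x") for _ in range(n)]

# Eavesdropper measures each photon in a random basis and re-sends it.
eve_bases = [random.choice("+x") for _ in range(n)]
eve_bits  = [b if sb == eb else random.randint(0, 1)
             for b, sb, eb in zip(sender_bits, sender_bases, eve_bases)]

# Receiver measures the re-sent photons, again in random bases.
receiver_bases = [random.choice("+x") for _ in range(n)]
receiver_bits  = [b if eb == rb else random.randint(0, 1)
                  for b, eb, rb in zip(eve_bits, eve_bases, receiver_bases)]

# Sift on sender/receiver basis agreement, as if Eve were not there.
keep = [i for i in range(n) if sender_bases[i] == receiver_bases[i]]
errors = sum(sender_bits[i] != receiver_bits[i] for i in keep)
error_rate = errors / len(keep)
print(f"error rate in sifted key: {error_rate:.2%}")  # typically around 25%
```

The 25% figure follows from the text above: Eve guesses wrong half the time, and each wrong guess flips the sifted bit half the time.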

Other things besides eavesdropping can cause raw key exchange errors. Hardware component issues and imperfections, or environmental effects on the quantum channel, can also cause photon loss or polarization change. All those errors are categorized as possible eavesdropper detections and are filtered out in key sifting. To determine how much information an eavesdropper could have gathered in the process, key distillation is used.

2) Key Distillation

Once we have a sifted key, to remove errors and any information that an eavesdropper could have gained, the sifted key must be processed again. The key after key distillation will be secure enough to be used as a secret key.

For example, for all the photons for which the eavesdropper used the right polarization filter and for which the receiver also used the right polarization filter, we do not have a detected communication interception. Here key distillation comes into play.

The first of the two steps is to correct all possible errors in the key, which is done using a classical error correction protocol. This step also outputs the error rate that occurred. From this error rate estimate we can calculate the amount of information the eavesdropper could have about the key.

The second step is privacy amplification, which uses compression on the key to squeeze out the information the eavesdropper may have gained. The compression factor depends proportionally on the error rate.

Why you shouldn’t ‘be yourself’ at work

‘Be yourself’ is the defining careers advice of the moment. It’s heard everywhere from business leaders in the boardroom to graduation day speeches. It’s so common it’s even a hiring tool for some companies.

One person striving to successfully heed this advice is Michael Friedrich, the Berlin-based vice-president of ScribbleLive, a Canadian software company. For Friedrich, being himself involves wearing shorts to work, and telling prospective clients he’s sleeping on a friend’s living-room floor while he finds a home of his own.

Playing by his own rules has worked well so far, Friedrich says. Thanks to the foreign languages, and well-honed intercultural skills picked up while travelling instead of going to university, he’s landed well-paying jobs. And, despite his unconventional behaviour at ScribbleLive, he’s won a major promotion.

Michael Friedrich bids farewell to his London colleagues before embarking on an 800-mile bicycle ride to Berlin, Germany (Credit: ScribbleLive London)

“I don’t worry about image in the traditional sense. I am the way I am,” says the 44-year-old. “I accept what I’m like and I celebrate it.”

But is ‘be yourself’ good advice for everyone? Just how much of yourself should you reveal to your colleagues? And, are some of us more suited to this ethos than others?

Blurred boundaries 

‘Being yourself’ can backfire in certain circumstances, says Professor Herminia Ibarra, an expert in organisational behaviour and leadership at London Business School and Insead in France.

For instance, her research suggests that people who have been promoted are at risk of failing in their new role if they have a fixed idea of their own ‘authentic’ personality. Rather than adapting their behaviour to fit their changed status, they carry on exactly as before. For instance, someone who sees themselves as open and friendly may share too much of their thoughts and feelings, thus losing credibility and effectiveness, she explains.

Just been promoted to manager? Professor Herminia Ibarra says it’s not always wise to carry on behaving the same way (Credit: Benedict Johnson)

“A very simple definition [of authenticity] is being true to self,” says Ibarra. “But self could be who I am today, who I’ve always been or who I might be tomorrow.”


People can use authenticity as an excuse for staying in their comfort zone, says Ibarra. Faced with change, “oftentimes they say ‘that’s not me’ and they use the idea of authenticity to not stretch and grow”.

The ease with which you adapt your behaviour to fit new situations depends to what degree you’re a ‘chameleon’ or a ‘true-to-selfer’, according to Mark Snyder, a social psychologist at the University of Minnesota. He created a personality test to measure this, called the Self-Monitoring Scale.

Chameleons treat their lives as an opportunity to play a series of roles, carefully choosing their words and deeds to convey just the right impression, says Snyder. In contrast, true-to-selfers use their social dealings with others to convey an unfiltered sense of their personalities, he says.

‘Chameleons’ may change their tune to suit whoever’s in the room – but they are more likely to get ahead, says Mark Snyder (Credit: Getty Images)

The problem with ‘be yourself’ as careers advice is that chameleons have a bit of an edge, says Snyder. That’s because a lot of jobs, particularly ones that are at higher levels in corporations, call for acting and self-presentational skills that favour people who change their deeds to fit the situation.

Earning your stripes

Other research suggests it’s only as you progress up the career ladder that you have the licence, power and opportunity to be authentic. It takes time to earn what sociologists call “idiosyncrasy credits”.

“Senior people have tried, experimented, trial-and-errored different versions of self, found whatever works for them, and consolidated a style,” says Ibarra. “They advise students and junior staff to ‘be yourself’ with good intent, forgetting that it’s been a 30-year process.”

Part of the danger in simply telling people to ‘be yourself’ is that they might think that’s all they need to do, says Jeremiah Stone, a New York-based recruitment specialist at Hudson RPO.

‘Being yourself’ can only get you so far – you’ve got to be able to back it up (Credit: Getty Images)

“It doesn’t mean that you go into an interview or a workplace environment and you behave in the same way you would with your mates. It means that you are engaging authentically with other people, that they get a sense of who you are and what’s important to you and what your values are,” he says. “It’s not bad advice. It’s just not particularly useful advice.”

Even Friedrich is unconvinced by ‘be yourself’ as words of wisdom – particularly for younger people. “The advice ‘be yourself’ – that’s starting in the middle. How can you be yourself if you don’t know yourself?” he says. “Get to know yourself and find out what makes you happy.”


PJSIP: Automatic Switch Transport type from UDP to TCP

We recently encountered issues with lost SIP signaling commands in different terminals, environments, and scenarios.
We were using UDP as our preferred transport type.
The potential causes could be:
1. Some SIP commands could be larger than the MTU size.
2. The send/recv queue buffer size of the socket handle was not large enough.
3. Some SIP commands (conference control) were really huge.

There is some information about this issue below, which could also be a way out of it.

According to RFC 3261 section 18.1.1:
“If a request is within 200 bytes of the path MTU, or if it is larger than 1300 bytes and the path MTU is unknown, the request MUST be sent using an RFC 2914 congestion controlled transport protocol, such as TCP.”

If the request is larger than 1300 bytes:

By this rule, PJSIP will automatically send the request with TCP if the request is larger than 1300 bytes. This feature was first implemented in ticket #831. The switching is done on a request-by-request basis, i.e. if an initial INVITE was originally meant to use UDP but ends up being sent with TCP because of this rule, then only that initial INVITE is sent with TCP; subsequent requests will use UDP, unless of course they are also larger than 1300 bytes. In particular, the Contact header stays the same; only the Via header is changed to TCP.
It could also be the case that the initial INVITE is sent with UDP, and once the request is challenged with a 401 or 407, its size grows larger than 1300 bytes due to the addition of the Authorization or Proxy-Authorization header. In this case, the request retry will be sent with TCP.
In case the TCP transport is not instantiated, you will see an error similar to this:
“Temporary failure in sending Request msg INVITE/cseq=15228 (tdta02EB0530), will try next server. Err=171060 (Unsupported transport (PJSIP_EUNSUPTRANSPORT))”
As the message says, the error is not permanent, and PJSIP will send the request anyway with UDP.
This TCP switching feature can be disabled as follows:
● at run-time, by setting pjsip_cfg()->endpt.disable_tcp_switch to PJ_TRUE;
● at compile-time, by setting PJSIP_DONT_SWITCH_TO_TCP to a non-zero value.
You can also tweak the 1300-byte threshold by setting PJSIP_UDP_SIZE_THRESHOLD to an appropriate value.
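The per-request switching rule described above can be sketched as follows. This is illustrative Python pseudocode of the decision logic, not PJSIP's actual C implementation:

```python
PJSIP_UDP_SIZE_THRESHOLD = 1300  # bytes; the tweakable threshold mentioned above

def pick_transport(request_size, path_mtu=None):
    """Decide the transport for a single request, per RFC 3261 section 18.1.1."""
    if path_mtu is not None and request_size >= path_mtu - 200:
        return "TCP"   # within 200 bytes of the known path MTU
    if path_mtu is None and request_size > PJSIP_UDP_SIZE_THRESHOLD:
        return "TCP"   # larger than 1300 bytes and path MTU unknown
    return "UDP"       # small enough to stay on UDP

print(pick_transport(1400))                 # TCP: over the 1300-byte threshold
print(pick_transport(900))                  # UDP
print(pick_transport(1400, path_mtu=1500))  # TCP: within 200 bytes of the MTU
```

Note the decision is per request, which is why a 401/407-challenged retry can switch to TCP even when the original INVITE went out over UDP.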

Goodbye, big aunt

Dec 31, 2016, a cloudless, clean day, when everybody was saying goodbye to each other, my wife's eldest aunt on her father's side passed away (my father-in-law's eldest brother's wife).
She suffered from a critical type of lung tumor, but passed away peacefully.
She was thin and short, a typical old Chinese woman, a symbol of the women of the 1930s.
She was an educated woman compared to most women of her age born in that era of China.
Thanks to her education, she was brilliant and deep-minded; you could feel it when she talked to you.
After so many years of hard work, she can finally rest in heaven, and now she can be together with big uncle again, living side by side once more.
I believe the Bodhisattva will bless them, as the Bodhisattva always does.

Goodbye, my big uncle

New year, new starts, new beginnings

So many things happened in 2016: good things, bad things, happy things, sad things. And I was busy with my own so-called "most comfortable state of mind", but actually I was cheating myself.

Now, 2017 is approaching, new year brings with it new opportunities and excuses to make new starts.

So, like everyone, I can pursue my new beginning now. Isn’t it amazing? XD

Yea, a fresh new start, start deep from the heart, inside out.

Looking forward to you, my 2017.

And I wish you a great year too, my friends.

The Decline of the Standards-Based Codec—and Good Riddance

I saw this post on Streaming Media Magazine and found we share the same opinion on HEVC, so I'm forwarding it to my blog.

Online is different from broadcast and doesn’t need formal standards. HEVC isn’t considered by many online video streamers, as the future belongs to VP9 and AV1.

Elsewhere in the issue, you find a 4,000-word article I wrote on VP9 that doesn’t mention HEVC. Why? Because for the vast majority of streaming producers that don’t distribute 4K video to smart TVs, the codec decision isn’t VP9 versus HEVC. It’s H.264 versus VP9, and HEVC isn’t really in the picture.

This dynamic highlights the reality that standards-based codecs are declining in importance, particularly in the streaming space. The success of H.264, first with Flash and later with HTML5, merely masked this trend. That is, H.264 was wildly successful in streaming (and later HTML5) because Adobe selected it for Flash, not because it was a technology standard. This is a subtle but critical distinction. It’s also a very significant sea change.

My first job in the codec world involved marketing a proprietary fractal-based codec for use on CD-ROMs. Our biggest competition came from codecs such as Indeo and Cinepak, and from an emerging standard called MPEG-1. My company never got traction, and (according to ancient memory) the three companies that sold MPEG-1 codecs were all purchased for more than $40 million. The lesson burned into my brain was that standard-based codecs always win.

In this regard, there was never any question that MPEG-2 would be the codec for DVD and early cable and satellite systems. The next standard, H.264, was deployed in satellite and cable and all the associated STBs, and later in mobile devices and retail OTT devices such as Roku and Apple TV. H.264 was the best performing codec around, and by the time VP8 arrived, H.264 was impossibly entrenched. Plus, with a reasonable cap of about $5 million per year (back in 2010, now $8.125 million for 2016), H.264 royalties were affordable, ensuring ubiquitous playback.

Fast-forward to 2016. H.264 is still everywhere, but it’s showing its age. VP9 provides the same quality at 50 percent to 60 percent of the bandwidth, and playback is free in the current versions of all browsers except for Internet Explorer and Safari. The Alliance for Open Media launched in September 2015, consolidating the development of three open source codecs into one engineering group. Google, Mozilla, and Microsoft are founding members, ensuring fast browser support for the first codec (called AV1), which should ship by March 2017. Members Netflix, Amazon, and Google (YouTube) will ensure fast deployment by large web publishers, while members ARM, AMD, Intel, and NVIDIA presage prompt support in hardware.

AV1 is free, while HEVC costs up to $1.20 (or more) per unit with a cap of up to $65 million, and that’s just for the two (of potentially four or more) IP owners with announced terms. With VP9 and AV1 freely available, there is no need for HEVC to deliver to computers and notebooks, and there is no business case (or realistic business model) for licensing HEVC in a browser.

The mobile device landscape is less clear. Apple included HEVC in FaceTime but removed any mention of the technology from its spec sheets after the second HEVC patent group formed. This ensures that Apple will pay far more in HEVC royalties than it will ever receive, making a strong business case for deploying AV1. Android 5.0 includes HEVC software decoder, with hooks to HEVC hardware decoder. However, both royalties are paid by Android licensees, not Google, which is clearly banking on AV1 for the future of YouTube.

Broadcast infrastructures, set-top boxes (STBs), and smart TVs will remain HEVC for a while. But with YouTube choosing VP9/AV1 for its UHD videos and Netflix, Amazon, Microsoft, and the hardware vendors behind AV1, support for the alliance codec in future smart TVs and STBs is assured. HEVC certainly won’t be the only technology these devices support.

The bottom line is that broadcast, with its hundreds of disparate publishers and suppliers, needs a formal standard. The streaming world just needs a reliable, well-supported technology, so a de facto standard set by a group of technology leaders and users is just as good. In fact, it’s better, if you consider the price tag.

This article originally ran in the Autumn 2016 European edition of Streaming Media magazine as “The Decline of the Standards-Based Codec.”

A little angel fell into our home

August 20, a cloudless sunny summer day, PM2.5 35, a very clear and clean day considering we are living in China, in Shanghai.

At 7 am on that bright day, a little angel came into our life. And today, we named her 韦曦.

曦 means dawn in Chinese, because she was born in the early morning.
韦曦 is pronounced the same as WISH in English, and also the same as the word for MANY in my hometown dialect.

So, yes, we do have MANY WISHes for her.
We wish her health, we wish her happiness, we wish her …

An issue when collaborating with HUAWEI VP9650 with H.460

TE40 caller :, E.164: 02510000
H600 callee :, E.164: 654320

Pcap file was captured on H600 side.

All exchanged signaling commands between H600 and VP9650:
…Twenty seconds later…
–>ReleaseComplete, DRQ

(h225 or h245) and ((ip.dst eq and ip.src eq or (ip.src eq and ip.dst eq

After receiving the TCS from the VP9650, the H600 did not respond with any further commands, which led to a ReleaseComplete from the VP9650.

Troubleshooting:
Checked the facility commands of the VP9650 and found that its Q.931 CRV value was 0, but with a facility reason of 5 (startH245).

HUAWEI's format of the facility message for H.460 startH245.
But we did not support that kind of rule.
After checking the ITU-T documents, it turned out to be a standard procedure.

You know what should be done.

Android: dlopen failure due to the “has text relocations” issue

For some reason, I dug out some apps I wrote several years ago, rebuilt them, and put them on my MI NOTE (Android 6.0) to run some tests.

Here is my cross compile environment:

  • NDK: previously downloaded, r7c + r8b
  • SDK: newly downloaded, 24.4.1

But when I tried to run the App on my phone, I got an error like this:

02-15 14:42:58.540: I/OpenGLRenderer(3260): Initialized EGL, version 1.4
02-15 14:42:58.699: W/InputMethodManager(3260): Ignoring onBind: cur seq=164, given seq=163
02-15 14:43:06.718: I/Timeline(3260): Timeline: Activity_launch_request time:6144239
02-15 14:43:06.877: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libffmpeg.so: has text relocations
02-15 14:43:06.897: D/FFMpeg(3260): Couldn't load lib: ffmpeg - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libffmpeg.so: has text relocations
02-15 14:43:06.905: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libezgl.so: has text relocations
02-15 14:43:06.910: D/FFMpeg(3260): Couldn't load lib: ezgl - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libezgl.so: has text relocations
02-15 14:43:06.920: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libeasyonvif.so: has text relocations
02-15 14:43:06.927: D/FFMpeg(3260): Couldn't load lib: easyonvif - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libeasyonvif.so: has text relocations
02-15 14:43:06.927: W/System.err(3260): rg4.net.onvifplayer.RSException: Couldn't load native libs
02-15 14:43:06.927: W/System.err(3260):     at rg4.net.onvifplayer.libEasyRTSP.<init>(libEasyRTSP.java:40)
02-15 14:43:06.927: W/System.err(3260):     at rg4.net.onvifplayer.PlayerActivity.<init>(PlayerActivity.java:33)
02-15 14:43:06.927: W/System.err(3260):     at java.lang.Class.newInstance(Native Method)
02-15 14:43:06.927: W/System.err(3260):     at android.app.Instrumentation.newActivity(Instrumentation.java:1068)
02-15 14:43:06.927: W/System.err(3260):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2322)
02-15 14:43:06.927: W/System.err(3260):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2481)
02-15 14:43:06.928: W/System.err(3260):     at android.app.ActivityThread.access$900(ActivityThread.java:153)
02-15 14:43:06.928: W/System.err(3260):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1349)
02-15 14:43:06.928: W/System.err(3260):     at android.os.Handler.dispatchMessage(Handler.java:102)
02-15 14:43:06.928: W/System.err(3260):     at android.os.Looper.loop(Looper.java:148)
02-15 14:43:06.928: W/System.err(3260):     at android.app.ActivityThread.main(ActivityThread.java:5432)
02-15 14:43:06.928: W/System.err(3260):     at java.lang.reflect.Method.invoke(Native Method)
02-15 14:43:06.928: W/System.err(3260):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:735)
02-15 14:43:06.928: W/System.err(3260):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:625)
02-15 14:43:06.956: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libffmpeg.so: has text relocations
02-15 14:43:06.962: D/FFMpeg(3260): Couldn't load lib: ffmpeg - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libffmpeg.so: has text relocations
02-15 14:43:06.968: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libezgl.so: has text relocations
02-15 14:43:06.974: D/FFMpeg(3260): Couldn't load lib: ezgl - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libezgl.so: has text relocations
02-15 14:43:06.985: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libeasyonvif.so: has text relocations
02-15 14:43:06.991: D/FFMpeg(3260): Couldn't load lib: easyonvif - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libeasyonvif.so: has text relocations
02-15 14:43:06.991: W/System.err(3260): rg4.net.onvifplayer.RSException: Couldn't load native libs
02-15 14:43:06.991: W/System.err(3260):     at rg4.net.onvifplayer.libEasyRTSP.<init>(libEasyRTSP.java:40)
02-15 14:43:06.991: W/System.err(3260):     at rg4.net.onvifplayer.PlayerActivity.onCreate(PlayerActivity.java:65)
02-15 14:43:06.992: W/System.err(3260):     at android.app.Activity.performCreate(Activity.java:6303)
02-15 14:43:06.992: W/System.err(3260):     at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1108)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2374)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2481)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread.access$900(ActivityThread.java:153)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1349)
02-15 14:43:06.992: W/System.err(3260):     at android.os.Handler.dispatchMessage(Handler.java:102)
02-15 14:43:06.992: W/System.err(3260):     at android.os.Looper.loop(Looper.java:148)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread.main(ActivityThread.java:5432)
02-15 14:43:06.992: W/System.err(3260):     at java.lang.reflect.Method.invoke(Native Method)
02-15 14:43:06.992: W/System.err(3260):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:735)
02-15 14:43:06.992: W/System.err(3260):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:625)
02-15 14:43:06.994: E/art(3260): No implementation found for int rg4.net.onvifplayer.libEasyRTSP.NewInstance() (tried Java_rg4_net_onvifplayer_libEasyRTSP_NewInstance and Java_rg4_net_onvifplayer_libEasyRTSP_NewInstance__)

Solution 1:

This issue can be solved by checking the targetSdkVersion in the manifest file.

Using “22” instead of “23” as the targetSdkVersion solved it (see below):

        <uses-sdk android:targetSdkVersion="22" />

I also checked the build.gradle file for the compileSdkVersion and targetSdkVersion:

android {
    compileSdkVersion 22
    buildToolsVersion '22.0.1'

    defaultConfig {
        minSdkVersion 15
        targetSdkVersion 22
    }
}

Solution 2:

The issue was caused by ffmpeg, and it can also be solved by updating to the latest ffmpeg code.


I took the latest from https://github.com/FFmpeg/FFmpeg

You will also need HAVE_SECTION_DATA_REL_RO declared somewhere in your build for the macro in asm.S to use the dynamic relocations option.

Further information:

Previous versions of Android would warn if asked to load a shared library with text relocations:

“libfoo.so has text relocations. This is wasting memory and prevents security hardening. Please fix.”.

Despite this, the OS would load the library anyway. Marshmallow rejects the library if your app’s target SDK version is >= 23. The system no longer logs this, because it assumes that your app will log the dlopen(3) failure itself and include the text from dlerror(3), which does explain the problem. Unfortunately, lots of apps seem to catch and hide the UnsatisfiedLinkError thrown by System.loadLibrary in this case, often leaving no clue that the library failed to load until you try to invoke one of your native methods and the VM complains that it isn’t present.

You can use the command-line scanelf tool to check for text relocations. You can find advice on the subject on the internet; for example https://wiki.gentoo.org/wiki/Hardened/Textrels_Guide is a useful guide.

You can also check whether your shared library has text relocations like this:

readelf -a path/to/yourlib.so | grep TEXTREL

If it has text relocations, it will show you something like this:

0x00000016 (TEXTREL)                    0x0

If this is the case, you may recompile your shared library with the latest NDK version available:

ndk-build -B -j 8

And if you check it again, the grep command will return nothing.

Do the ZTE T800 and HUAWEI TEx0 support T.140?

Both the ZTE T800 and the HUAWEI TEx0 claim to support T.140, but after digging into these devices by running some tests between the T800, TE40, and TE60, I’m not convinced.

Maybe it’s only because I don’t know how to configure them to enable T.140.

Here is some T.140-related information, along with my steps to analyze the protocols of the HUAWEI TEx0 and ZTE T800.

A screenshot of the HUAWEI TEx0’s administration manual about T.140:



1. T.140-related standard documents





6) RFC 4103 – RTP Payload for Text Conversation

2. Major descriptions of implementing T.140

T.140 related descriptions in T-REC-H.323-200002-S!AnnG!PDF-E.

1) H.245 TCS for T.140

In the capabilities exchange, when using a reliable channel, specify:

DataApplicationCapability.application = t140
DataProtocolCapability = tcp

In the capabilities exchange, when using an unreliable channel, specify:

DataApplicationCapability.application = t140
DataProtocolCapability = udp

2) H.245 Open Logical Channel

In the Open Logical Channel procedure, specify:

OpenLogicalChannel.forwardLogicalChannelParameters = dataType
DataType = data

And select a reliable or unreliable channel for the transfer of T.140 data by specifying the DataApplicationCapability and the DataProtocolCapability as above.
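As an illustration only, the reliable/unreliable choice above could be modeled like this. The helper and its dict layout are made up for this sketch; they are not an actual H.245 stack API, just a restatement of the field values the standard prescribes.

```python
def t140_capability(reliable: bool) -> dict:
    """Model the H.245 fields for a T.140 data capability:
    the application is always t140; the protocol is tcp for a
    reliable channel and udp for an unreliable one."""
    return {
        "DataApplicationCapability": {"application": "t140"},
        "DataProtocolCapability": "tcp" if reliable else "udp",
    }

print(t140_capability(True))   # reliable channel  -> tcp
print(t140_capability(False))  # unreliable channel -> udp
```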

According to the description in T-REC-H.224-200501-I!!PDF-E, although there should be only one H.221 channel, we can still send multiple protocols, such as FECC, T.120, and T.140, over that single channel; this type of channel is called an H.221 MLP data channel.

3) Packetization of T.140 data

Reliable TCP mode: skipped, because I didn’t find any newly established TCP connections.

Unreliable mode: I did find an H.224 capability in both of these entities, and there were no OLC requests other than audio, video, and H.224 data.

Let’s suppose they are re-using the H.221 MLP data channel for both FECC and T.140 transmission.

4) H.224 protocol octet structure


5) H.224 -Standard Client ID Table


3. H.224 data packets sent between TE60 and T800

I managed to extract the H.224 data packets from the PCAP file.

And they are like these:

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff

Explaining the packet according to the standard’s description:

7e 7e 7e    Flags
00          Upper DLCI                            Q.922 Address Header
86          Lower DLCI, 0x6 or 0x7 + EA
c0          UI Mode                               Q.922 Control Octet(s)
00          Upper Destination Terminal Address    Data Link Header
00          Lower Destination Terminal Address
00          Upper Source Terminal Address
00          Lower Source Terminal Address
00          Standard Client ID
03          ES + BS
40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff    Client data octets

Comparing the extracted Standard Client ID with the H.224 Standard Client ID Table, we can draw a conclusion for this packet: it’s a CME packet, not a T.140 packet.

Now that we know how to identify the data type of H.224 packets, we can classify all the H.224 data packets exchanged between the TE60 and the T800.

TE60 –> T800

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 80 00 80 81 12 c8 7e 7e 7e ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 fb c0 c8 a8 bf 3f 3f 7f ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 fb c0 c8 a8 bf 3f 3f 7f ff

T800 –> TE60

7e 7e 7e 00 8e c0 00 00 00 00 00 03 80 00 40 81 f7 00 00 5a 00 00 4c 50 3f 3f 3f 3f 3f 3f 16

7e 7e 7e 00 8e c0 00 00 00 00 00 03 40 00 81 68 a8 0f 92 07 cb 00 28 80 3d f1 ef cf cf cf cf cf cd

7e 7e 7e 00 8e c0 00 00 00 00 80 03 a0 08 0e 45 7e 7e 7e 7e 7e 7e


Among the listed packets, only one is not a CME packet: the one whose Standard Client ID octet is 0x80.

According to T-REC-H.323-200002-S!AnnG!PDF-E.pdf, we should bit-reverse the octet to get the real value, which gives 0x01. Checking that against the Standard Client ID table, we see it’s a FECC packet, still not T.140.
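The classification above can be sketched in a few lines of Python. This is a rough sketch based only on the octet layout shown earlier: the fixed offsets assume exactly the flag/address/control layout of these captures, the bit-reversal follows the note above, and the client table is abridged to the two IDs actually seen (anything else, including a real T.140 text client, would show as “other”).

```python
def bit_reverse(octet: int) -> int:
    """These captures carry the client ID bit-reversed, so flip the bits."""
    return int(f"{octet:08b}"[::-1], 2)

def h224_client_id(frame_hex: str) -> int:
    """Return the (bit-reversed) Standard Client ID of an H.224 frame."""
    octets = bytes.fromhex(frame_hex)
    i = 0
    while octets[i] == 0x7E:          # skip the leading 0x7e flag octets
        i += 1
    # Q.922 address (2) + control (1) + dest terminal (2) + src terminal (2)
    return bit_reverse(octets[i + 7])

CLIENTS = {0x00: "CME", 0x01: "FECC"}  # abridged Standard Client ID table

te60 = "7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff"
t800 = "7e 7e 7e 00 8e c0 00 00 00 00 80 03 a0 08 0e 45 7e 7e 7e 7e 7e 7e"
print(CLIENTS.get(h224_client_id(te60), "other"))  # CME
print(CLIENTS.get(h224_client_id(t800), "other"))  # FECC
```

Running every captured frame through this classifier is how the table above was judged: all CME except the single FECC frame.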


God, I’m lost. Can anyone tell me how to get T.140 working on the ZTE T800 and HUAWEI TE60?

An example of AAC capability in H.245

I keep getting emails from everywhere asking me about AAC audio in H.323.

So I put together this post as a worked example to complement my previous posts: http://rg4.net/archives/1480.html, http://rg4.net/archives/1126.html, http://rg4.net/archives/1112.html

The pcap file for this example can be downloaded here: HUAWEI_TE600-vs-ZTE_T800.pcapnp

Here it is.

1. Basic knowledge: AAC LD descriptions in 14496-3

It operates at up to 48 kHz sampling rate and uses a frame length of 512 or 480 samples, compared to the 1024 or 960 samples used in standard MPEG-2/4 AAC to enable coding of general audio signals with an algorithmic delay not exceeding 20 ms. Also the size of the window used in the analysis and synthesis filterbank is reduced by a factor of 2.

And Table 1.3 (Audio Profiles) of 14496-3 defines the AAC formats, such as AAC LC and AAC LD.

2. Basic knowledge: AAC capability description in the H.245 TCS

maxBitRate: 640
ProfileAndLevel: nonCollapsing item –> parameterIdentifier: standard = 0
AAC format: nonCollapsing item –> parameterIdentifier: standard = 1
AudioObjectType: nonCollapsing item –> parameterIdentifier: standard = 3
Config(Including sample rate and channel parameters): nonCollapsing item –> parameterIdentifier: standard = 4
MuxConfig: nonCollapsing item –> parameterIdentifier: standard = 8

3. H.245 TCS of HUAWEI TE60 and ZTE T800

HUAWEI TE60:
There are two AAC capabilities:
Capability 1:
collapsing item –> parameterIdentifier=2, parameterValue=2
collapsing item –> parameterIdentifier=5, parameterValue=1
ProfileAndLevel: 24
AAC format: logical (0)
AudioObjectType: 23

Capability 2:
collapsing item –> parameterIdentifier=2, parameterValue=2
collapsing item –> parameterIdentifier=5, parameterValue=1
ProfileAndLevel: 24
AudioObjectType: 23

ZTE T800:
There are four AAC capabilities:
Capability 1:
Capability 2:
Capability 3:
Capability 4:

4. Detailed parameters in the OLC commands

TE60 OLC to T800:
maxBitRate: 1280
item 0 –> parameterIdentifier=2, parameterValue=2
item 1 –> parameterIdentifier=5, parameterValue=1
item 0 –> parameterIdentifier=0, value=25
item 1 –> parameterIdentifier=1, value=logical (0)
item 2 –> parameterIdentifier=3, value=23
item 3 –> parameterIdentifier=6, value=logical (0)
item 4 –> parameterIdentifier=8, octetString = 41 01 73 2a 00 11 00
item 5 –> parameterIdentifier=9, octetString = 00 00 00

AOT=23 –> AAC LD
MuxConfig = 41 01 73 2a 00 11 00 –> LATM format
Sample rate = (MuxConfig[2] & 0x0f) = 0x73 & 0x0f = 3 = 48 kHz
Channel = (MuxConfig[3] & 0xf0) >> 4 = (0x2a & 0xf0) >> 4 = 0x20 >> 4 = 2 = Stereo

HUAWEI sent open logical channel with AAC LD stereo to ZTE.

T800 OLC to TE60:
maxBitRate: 1280
item 0 –> parameterIdentifier=2, parameterValue=2
item 1 –> parameterIdentifier=5, parameterValue=1
item 0 –> parameterIdentifier=0, value=25
item 1 –> parameterIdentifier=1, value=logical (0)
item 2 –> parameterIdentifier=3, value=23
item 3 –> parameterIdentifier=6, value=logical (0)
item 4 –> parameterIdentifier=8, octetString = 41 01 73 1a 00 11 00
item 5 –> parameterIdentifier=9, octetString = 00 00 00

AOT=23 –> AAC LD
MuxConfig = 41 01 73 1a 00 11 00 –> LATM format
Sample rate = (MuxConfig[2] & 0x0f) = 0x73 & 0x0f = 3 = 48 kHz
Channel = (MuxConfig[3] & 0xf0) >> 4 = (0x1a & 0xf0) >> 4 = 0x10 >> 4 = 1 = Mono

ZTE sent open logical channel with AAC LD mono to HUAWEI.
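The two OLC decodings above follow the same byte arithmetic, which can be condensed into a small helper. A sketch only: the indexing mirrors the calculations shown above, and the sampling-frequency table is the standard MPEG-4 index table (abridged).

```python
# MPEG-4 sampling frequency index table (ISO/IEC 14496-3), abridged
SAMPLE_RATES = {0: 96000, 1: 88200, 2: 64000, 3: 48000,
                4: 44100, 5: 32000, 6: 24000, 7: 22050}

def parse_mux_config(octets_hex: str):
    """Pull sample rate and channel count out of the MuxConfig octet string."""
    b = bytes.fromhex(octets_hex)
    rate = SAMPLE_RATES[b[2] & 0x0F]    # low nibble of the 3rd octet
    channels = (b[3] & 0xF0) >> 4       # high nibble of the 4th octet
    return rate, channels

print(parse_mux_config("41 01 73 2a 00 11 00"))  # (48000, 2): stereo, TE60's OLC
print(parse_mux_config("41 01 73 1a 00 11 00"))  # (48000, 1): mono, T800's OLC
```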
Any further questions?

How to Change the Buffer on VLC

The VLC media player includes file cache and stream buffer options to enable fine-grained control over video playback on machines with limited system resources. If you use VLC to stream network video, you can set the buffer size on a per-stream or permanent basis. For local file playback, you can raise or lower the file cache size to limit the amount of memory VLC uses or the frequency with which it accesses the disk. For systems with low memory, a low cache setting makes more resources available to the operating system.

Permanently Change the Streaming Buffer

  • Click “Tools” and select “Preferences.” In the lower left of the Preferences dialog, select the “All” button under “Show Settings” to display the advanced settings.
  • Select “Stream Output” from the sidebar menu. The setting that affects buffer size is labeled “Stream Output Muxer Caching.”
  • Enter a new amount in milliseconds in the Muxer Caching field. Since this setting requires a value in milliseconds, the amount of memory it uses varies with the streaming video’s quality. If you have ample RAM but a slow network connection, a high setting such as 2,000 ms to 3,000 ms is safe. You may need to experiment to find the right setting for your machine.

Change the Buffer for Individual Streams

  • Press “Ctrl-N” to open a new network stream, then enter a URL in the address field. VLC supports HTTP, FTP, MMS, UDP and RTSP protocols, and you must enter the full URL in the address field.
  • Select “Show More Options” to display advanced settings for the current network stream. The Caching option controls the streaming buffer size.
  • Enter an amount in milliseconds in the Caching field, then click “Play.” Depending on the cache setting, the video may take a few seconds to start streaming.

Change the Buffer for Local Files

  • Click “Tools” and select “Preferences.” In the lower left of the Preferences dialog, select the “All” button under “Show Settings” to display the advanced settings.
  • Select “Input / Codecs” from the sidebar menu, then scroll to the Advanced section in the Input / Codecs panel.
  • Enter a new amount in the File Caching field. The default setting is 300 ms, which results in VLC accessing your disk three times per second. If video playback stutters on your machine, increasing this setting can make it smoother. However, depending on your RAM and CPU resources, you may need to experiment to find the right setting.

Tips & Warnings

  • Information in this article applies to VLC 2.1.5. It may vary slightly or significantly with other versions.

Source: http://www.ehow.com/how_8454118_change-buffer-vlc.html

Vendor ID, Product ID information in SIP

As you may know, to be a robust conferencing entity, we must take good care of compatibility with devices from different manufacturers.

In the H.323 protocol, we can use fields like Vendor ID, Product ID, and Version ID in the signaling commands.

But how do you do this when using the SIP protocol?

  1. Definitions in RFC 3261

20.35 Server

The Server header field contains information about the software used by the UAS to handle the request.

Revealing the specific software version of the server might allow the server to become more vulnerable to attacks against software that is known to contain security holes. Implementers SHOULD make the Server header field a configurable option.

Example:

Server: HomeServer v2

20.41 User-Agent

The User-Agent header field contains information about the UAC originating the request. The semantics of this header field are defined in [H14.43].

Revealing the specific software version of the user agent might allow the user agent to become more vulnerable to attacks against software that is known to contain security holes. Implementers SHOULD make the User-Agent header field a configurable option.

Example:

User-Agent: Softphone Beta1.5



  2. [H14.43] User-Agent definition in RFC 2616

14.43 User-Agent

The User-Agent request-header field contains information about the user agent originating the request. This is for statistical purposes, the tracing of protocol violations, and automated recognition of user agents for the sake of tailoring responses to avoid particular user agent limitations. User agents SHOULD include this field with requests.

The field can contain multiple product tokens (section 3.8) and comments identifying the agent and any subproducts which form a significant part of the user agent. By convention, the product tokens are listed in order of their significance for identifying the application.

User-Agent     = "User-Agent" ":" 1*( product | comment )


User-Agent: CERN-LineMode/2.15 libwww/2.17b3



  3. How did TANDBERG and Polycom implement this?

User-Agent format of TANDBERG 775
Server format of TANDBERG 775


User-Agent format of Polycom

So, to jump to the conclusion:

  1. As a UAC, identify yourself in the User-Agent field.
  2. As a UAS, identify yourself in the Server field.

Comparing the TANDBERG and Polycom implementations, TANDBERG’s format is the more proper one.
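To make the conclusion concrete, a minimal way to emit vendor/product/version information in SIP could look like this. A sketch only: the product-token shape follows RFC 2616 section 3.8, and the product and vendor names are made up for illustration.

```python
def sip_identity_header(is_uac: bool, product: str, version: str,
                        comment: str = "") -> str:
    """Build a User-Agent (UAC) or Server (UAS) header value carrying
    vendor/product/version info in RFC 2616 product-token style."""
    name = "User-Agent" if is_uac else "Server"
    value = f"{product}/{version}"
    if comment:                      # optional comment, e.g. the vendor name
        value += f" ({comment})"
    return f"{name}: {value}"

print(sip_identity_header(True, "ExampleTerm", "1.0", "ACME Corp"))
# User-Agent: ExampleTerm/1.0 (ACME Corp)
print(sip_identity_header(False, "ExampleTerm", "1.0"))
# Server: ExampleTerm/1.0
```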

Common video communication protocol intro – GB28181

It’s not a worldwide standard, but a China-only standard, drafted by a number of Chinese government bodies.

P.S. The presentation is in Chinese.

Video-communication-protocols-GB28181.ppt (PDF version: Video-communication-protocols-GB28181.pdf)