QKD – How Quantum Cryptography Key Distribution Works

Forwarded from: https://howdoesinternetwork.com/2016/quantum-key-distribution


QKD – Quantum key distribution is the magic part of quantum cryptography. Every other part of this new cryptography mechanism remains the same as in the standard cryptography techniques in use today.

By using quantum particles, which behave under the rules of quantum mechanics, keys can be generated and distributed to the receiver in a completely safe way. The quantum-mechanical principle that underpins the protection of the key exchange is Heisenberg’s Uncertainty Principle.

Heisenberg’s Uncertainty Principle states that it is impossible to measure both the speed and the current position of a quantum particle at the same time. It furthermore states that the state of the observed particle will change if and when it is measured. This fairly negative axiom, that a measurement cannot be made without perturbing the system, is put to positive use by quantum key distribution.

In a real communication system, if somebody tries to intercept photon-based communication in order to obtain the crypto key being generated by the photon transfer, they will need to squeeze the transferred photons through their own polarization filter to read the information encoded on them. As soon as they try the wrong filter, they will forward the wrong photon. The sender and receiver will notice the disparity in the exchanged data and interpret it as the detection of an interception. They will then restart the process of generating a new crypto key.

The photon, and how is it used?

1) Photon – The smallest particle of light is a photon. It has three types of spin: horizontal, vertical and diagonal, which can be imagined as right-to-left polarization.

2) Polarization – Polarization is used to polarize a photon. Polarizing the photon means filtering the particle through a polarization filter in order to filter out unwanted types of spin. A photon has all three spin states at the same time. We can manipulate the spin of a photon by putting a filter in its path. A photon, when passed through a polarization filter, takes on the particular spin that the filter lets through.

3) Spin – Spin is usually the most complicated property to describe. It is a property of some elementary particles, like the electron and the photon. When they move through a magnetic field, they are deflected as if they had the properties of little magnets.

If we take the classical world as an example, a charged, spinning object has magnetic properties. Elementary particles like photons and electrons have similar properties. We know from the rules of quantum mechanics that elementary particles cannot actually spin. Despite this inability to spin, physicists named these magnetic properties of elementary particles “spin”. It can be a bit misleading, but it helps in remembering the fact that a photon will be deflected by a magnetic field. A photon’s spin does not change, and it can manifest in two possible orientations.

4) LED – Light-emitting diodes are used to create photons in most quantum-optics experiments. LEDs create unpolarized (real-world) light.

Modern technology has advanced, and today it is possible to use an LED as a source of single photons. In this way a string of photons is created which is then used in the quantum channel for key generation and distribution in the quantum key distribution process between sender and receiver.

Normal optical networking devices use LED light sources which create photon bursts instead of individual photons. In quantum cryptography a single photon must be sent at a time, in order to have the chance to polarize it on entry into the optic channel and check the polarization on the exit side.

Data Transmission Using Photons

The most technically challenging part of transmitting data encoded in individual photons is the technique for reading the encoded bit of data back out of each photon. How is it possible to read the bit encoded in a photon when the very essence of quantum physics makes measurement of a quantum state impossible without perturbation? There is an exception.

We attach one bit of data to each photon by polarizing each individual photon. Polarizing a photon is done by filtering it through a polarization filter. The polarized photon is then sent across the quantum channel towards the receiver on the other side.

Heisenberg’s Uncertainty Principle comes into the experiment with the rule that a photon, once polarized, cannot be measured again, because the measurement will change its state (the ratio between the different spins).

Fortunately, there is an exception in the Uncertainty Principle which enables the measurement, but only in the special case when the measurement of the photon’s spin properties is done with a device (a filter, in this case) whose quantum state is compatible with the measured particle.

In the case when a photon’s vertical spin is measured with a diagonal filter, the photon will either be absorbed by the filter, or the filter will change the photon’s spin properties: the photon will pass through the filter but will come out with diagonal spin. In both cases the information sent by the sender is lost on the receiver side.

The only way to read a photon’s currently encoded bit/spin is to pass it through the right kind of filter. If it was polarized with diagonal polarization (X), the only way to read this spin is to pass the photon through a diagonal (X) filter. If a linear filter (+) is used in an attempt to read that photon’s polarization, the photon will either be absorbed or it will change its spin and come out with a different polarization than it had on the source side.

The spins we can produce when different polarization filters are used:

  •   Linear polarization (+)
      •   Horizontal spin (–)
      •   Vertical spin (|)
  •   Diagonal polarization (X)
      •   Diagonal spin to the left (\)
      •   Diagonal spin to the right (/)

Key Generation or Key Distribution

The technique of transmitting data using photons in order to generate a secure key at the quantum level is usually referred to as the Quantum Key Distribution process. Sometimes QKD is also wrongly referred to as quantum cryptography; QKD is only a part of quantum crypto.

Key distribution/generation using photon properties like spin is solved by Quantum Key Distribution protocols, allowing the exchange of a crypto key with security guaranteed by the laws of physics. When finally generated, the key is absolutely secure and can be further used with all sorts of conventional crypto algorithms.

The Quantum Key Distribution protocols that are commonly mentioned and mostly in use in today’s implementations are the BB84 protocol and the SARG protocol.

BB84 was the first to be invented and it is still commonly used. It is also the first to be described in papers like this one which try to explain how quantum key exchange works. SARG was created later as an enhancement which brought a different key sifting technique, described later in this paper.

1) Attaching Information bit on the photon – Key Exchange

The Key Exchange phase, sometimes referred to as Raw Key Exchange in anticipation of the later need for key sifting, is a technique common to both of the listed Quantum Key Distribution protocols, BB84 and SARG. To be able to transfer numeric (binary) information across the quantum channel, we need to apply a specific encoding to the different photon states. For example, the encoding could be applied as in Table 1 below, making each photon spin carry a different binary value.


Table 1 – QKD – Encoding of photon states
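Since Table 1 itself is not reproduced in this text, the idea can be sketched in code with a hypothetical BB84-style assignment (the spin symbols and the concrete bit mapping here are assumptions, not the table’s actual values):

```python
# Hypothetical bit assignment for each spin state; the concrete mapping
# in Table 1 is not reproduced here, so this is a BB84-style guess.
ENCODING = {
    "-": 0,   # horizontal spin, linear basis (+)
    "|": 1,   # vertical spin, linear basis (+)
    "\\": 0,  # left diagonal spin, diagonal basis (X)
    "/": 1,   # right diagonal spin, diagonal basis (X)
}

def encode_bit(bit, basis):
    """Pick the spin symbol that carries `bit` in the chosen basis ('+' or 'X')."""
    spins = {"+": ("-", "|"), "X": ("\\", "/")}
    return spins[basis][bit]
```

With such a table, sending the bit 1 in the linear basis means emitting a photon with vertical spin.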

In the key distribution process, the first step is for the sender to apply polarization to the sent photons and take note of the applied polarization. As an example, we will take Table 2 below as the list of sent photons with their polarization information.

Table 2 – QKD – Encoded photons

The sender sent the binary data:

0 1 0 0 0 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1

If the system works with integers, this data can be formatted as an integer:

Table 3 – Binary to Decimal Conversion Table

The sender sent the key 267155, but this is just the start of the key generation process, in which this key will be transformed from the initially sent group of bits ( 0 1 0 0 0 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1 ) into the real generated and secured key.
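The binary-to-decimal conversion above can be checked in a couple of lines:

```python
bits = "0 1 0 0 0 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1"
raw_key = int(bits.replace(" ", ""), 2)
print(raw_key)  # 267155
```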

2) Reading Information bits on the receiver side

The question arises of how we can use the above-described properties of the photon and still be able to actually read it on the receiver side. In the step above, photons with information attached to them were sent to the receiver side.

The next step will describe how quantum key distribution, and with that the whole quantum cryptography, works.

While sending, a list is made containing each photon sent from sender to receiver and the specific spin it was polarized with (the bit of information encoded on each photon).

In the optimal case, when the sender sends a photon with vertical spin and the receiver also applies a vertical filter at the time of the photon’s arrival, they will successfully transfer a bit of data using a quantum particle (the photon). In the less optimal case, when a photon with vertical spin is measured with a diagonal filter, the outcome will be a photon with diagonal spin or no photon at all; the latter happens if the photon is absorbed by the filter. In this case, the transferred bit of data will later get dumped in the key sifting or key verification phase.
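The two cases can be sketched as a toy model. This is a deliberate simplification, not the real physics: the spin symbols and the 50/50 absorb-or-repolarize outcome for a mismatched filter are assumptions for illustration only.

```python
import random

RECTILINEAR = ("-", "|")   # spins readable with the linear (+) filter
DIAGONAL = ("\\", "/")     # spins readable with the diagonal (X) filter

def measure(spin, basis, rng=random):
    """Toy measurement: a matching filter reads the spin faithfully;
    a mismatched filter either absorbs the photon (None) or lets it
    through with a random spin from the filter's own basis."""
    matching = RECTILINEAR if basis == "+" else DIAGONAL
    if spin in matching:
        return spin                  # optimal case: the bit is transferred
    if rng.random() < 0.5:
        return None                  # photon absorbed by the filter
    return rng.choice(matching)      # photon re-polarized, information lost
```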

3) Key Verification – Sifting Key Process

The key sifting phase, or key verification, is a technique done differently in the two listed Quantum Key Distribution protocols, BB84 and SARG. In the last section, the less optimal case was described, in which a photon with vertical spin is measured with a diagonal filter. The outcome of that measurement will give the receiver a photon with diagonal spin or no photon at all.

Key verification comes into play now; it is usually referred to as the key sifting process.

In the BB84 protocol, the receiver communicates with the sender and gives him the list of the filters applied to every received photon. The sender analyzes that list and responds with a shorter list. That list is made by leaving out the instances where sender and receiver used different filters for a single photon transfer.

In the SARG protocol, the receiver gives the sender the list of results he produced from each received photon, without sending the filter orientations used (the difference from BB84). The sender then needs to use that list plus the polarization he applied while sending to deduce the orientation of the filter used by the receiver. The sender then unveils to the receiver for which transfers he was able to deduce the polarization. Sender and receiver discard all other cases.

In this whole process, the sending of polarized photons is done through a dedicated optical fiber line.

If we take BB84 as an example, the key sifting process is done by the receiver sending the sender only the list of polarizations applied in each photon transfer. The receiver does not send the spin or the bit value he got as a result of each transfer. With that in mind, it is clear that the communication channel for key verification need not be a quantum channel, but rather a normal communication channel, without even the need for encryption. Receiver and sender are exchanging data that is only locally significant to their process of deducing in which steps they succeeded in sending one polarized photon and reading that photon’s one bit of information on the other side.

At the end of the key sifting process, assuming no eavesdropping happened, both sides will be in possession of exactly the same cryptographic key. The key after the sifting process will be half the original raw key length when BB84 is used, or a quarter with SARG. The other bits are discarded in the sifting process.
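A quick simulation (a sketch, not a faithful physical model) shows why BB84 sifting keeps roughly half of the raw key: sender and receiver pick bases independently, so they match about 50% of the time.

```python
import random

def bb84_sift(n_photons=10_000, seed=42):
    """Simulate the sifting step only: generate random bits and bases,
    then keep the positions where both sides chose the same basis."""
    rng = random.Random(seed)
    bases = ("+", "X")
    sender_bits = [rng.randint(0, 1) for _ in range(n_photons)]
    sender_bases = [rng.choice(bases) for _ in range(n_photons)]
    receiver_bases = [rng.choice(bases) for _ in range(n_photons)]
    matches = [i for i in range(n_photons)
               if sender_bases[i] == receiver_bases[i]]
    return [sender_bits[i] for i in matches]

sifted = bb84_sift()
print(len(sifted) / 10_000)  # close to 0.5: about half the raw key survives
```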

Communication Interception – Key Distillation

1) Interception Detection

If a malicious third party wants to intercept the communication between the two sides in order to read the encoded information, he will have to randomly apply polarization to the transmitted photons. After polarizing them, this third party needs to forward the photons on to the original receiver. As it is not possible to guess all polarizations correctly, when sender and receiver validate the polarizations the receiver will not be able to decrypt the data, and the interception of the communication is detected.

On average, an eavesdropper trying to intercept photons will use the wrong filter polarization in half of the cases. By doing this, the state of those photons is changed, producing errors in the raw key exchanged by the emitter and receiver.

It is basically the same thing that happens when the receiver uses the wrong filter while trying to read a photon’s polarization, only here the wrong filter is used by an eavesdropper.

In both cases, to prove the integrity of the key, it is enough that sender and receiver check for errors in the sequence of the raw key exchange.

Other things besides eavesdropping can cause raw key exchange errors. Hardware component issues and imperfections, as well as environmental effects on the quantum channel, can also cause photon loss or polarization change. All those errors are treated as possible eavesdropper detections and are filtered out in key sifting. To be sure how much information an eavesdropper could have gathered in the process, key distillation is used.

2) Key Distillation

Once we have a sifted key, to remove errors and any information an eavesdropper could have gained, the sifted key must be processed again. The key after key distillation will be secure enough to be used as a secret key.

For example, for all the photons for which the eavesdropper used the right polarization filter and for which the receiver also used the right polarization filter, we have no detected communication interception. Here key distillation comes into play.

The first of the two steps is to correct all possible errors in the key, which is done using a classical error-correction protocol. This step also outputs the error rate that occurred. From this error-rate estimate we can calculate the amount of information the eavesdropper could have about the key.

The second step is privacy amplification, which compresses the key to squeeze out the information of the eavesdropper. The compression factor depends proportionally on the error rate.
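A minimal sketch of the privacy-amplification step, with heavy assumptions: the `2 * error_rate` leakage estimate and the SHA-256 compressor are placeholders for illustration; real QKD systems compute information-theoretic bounds and use universal hash families.

```python
import hashlib

def privacy_amplify(sifted_bits, error_rate):
    """Compress the sifted key in proportion to the estimated leakage.
    The factor `2 * error_rate` is an illustrative guess, not the bound
    a real implementation would derive from the error-correction step."""
    leaked = min(1.0, 2 * error_rate)
    out_len = int(len(sifted_bits) * (1 - leaked))
    digest = hashlib.sha256(bytes(sifted_bits)).digest()
    stream = "".join(f"{byte:08b}" for byte in digest)
    return stream[:out_len]   # assumes out_len <= 256 bits for this sketch

final_key = privacy_amplify([0, 1] * 50, error_rate=0.10)
print(len(final_key))  # 80: a 100-bit sifted key shrunk by 20%
```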

PJSIP: Automatic Switch Transport type from UDP to TCP

We recently encountered issues with lost SIP signaling commands in different terminals, environments, and scenarios.
We were using UDP as our preferred transport type.
The potential causes could be:
1. There were SIP commands which could be larger than the MTU size.
2. The send/recv queue buffer size of the socket handle was not big enough.
3. Some SIP commands (conference control) were really tremendous.

Here is some information about this issue, which could also be a way out of it.

According to RFC 3261 section 18.1.1:
“If a request is within 200 bytes of the path MTU, or if it is larger than 1300 bytes and the path MTU is unknown, the request MUST be sent using an RFC 2914 congestion controlled transport protocol, such as TCP.”
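The quoted rule is easy to state as code. This is a paraphrase of the RFC text, not PJSIP’s actual implementation:

```python
def needs_congestion_controlled_transport(request_size, path_mtu=None):
    """RFC 3261 section 18.1.1: switch to a congestion-controlled
    transport (e.g. TCP) when the request comes within 200 bytes of a
    known path MTU, or exceeds 1300 bytes when the MTU is unknown."""
    if path_mtu is not None:
        return request_size > path_mtu - 200
    return request_size > 1300

print(needs_congestion_controlled_transport(1400))                 # True
print(needs_congestion_controlled_transport(1200))                 # False
print(needs_congestion_controlled_transport(1400, path_mtu=1500))  # True
```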

If the request is larger than 1300 bytes:

By this rule, PJSIP will automatically send a request with TCP if the request is larger than 1300 bytes. This feature was first implemented in ticket #831. The switching is done on a request-by-request basis, i.e. if an initial INVITE is originally meant to use UDP but ends up being sent with TCP because of this rule, then only that initial INVITE is sent with TCP; subsequent requests will use UDP, unless of course they are also larger than 1300 bytes. In particular, the Contact header stays the same; only the Via header is changed to TCP.
It could be the case that the initial INVITE is sent with UDP, and once the request is challenged with a 401 or 407, its size grows beyond 1300 bytes due to the addition of an Authorization or Proxy-Authorization header. In this case, the request retry will be sent with TCP.
In case the TCP transport is not instantiated, you will see an error similar to this:
“Temporary failure in sending Request msg INVITE/cseq=15228 (tdta02EB0530), will try next server. Err=171060 (Unsupported transport (PJSIP_EUNSUPTRANSPORT))”
As the text says, the failure is not permanent: PJSIP will send the request anyway with UDP.
This TCP switching feature can be disabled as follows:
● at run-time by setting pjsip_cfg()->endpt.disable_tcp_switch to PJ_TRUE.
● at compile-time by setting PJSIP_DONT_SWITCH_TO_TCP to non-zero.
You can also tweak the 1300 threshold by setting PJSIP_UDP_SIZE_THRESHOLD to the appropriate value.

An issue when interoperating with a HUAWEI VP9650 using H.460

TE40 caller, E.164: 02510000
H600 callee, E.164: 654320

Pcap file was captured on H600 side.

All exchanged signaling commands between H600 and VP9650:
…Twenty seconds later…
–>ReleaseComplete, DRQ

(h225 or h245) and ((ip.dst eq and ip.src eq or (ip.src eq and ip.dst eq

After receiving the TCS from VP9650, the H600 did not respond with any further commands, which led to a ReleaseComplete from VP9650.

Troubleshooting:
Checked the facility commands of VP9650 and found that its Q.931 CRV value was 0, but with a facility reason of 5 (startH245).

HUAWEI’s format of the facility msg of H.460 startH245
But we did not support that kind of rule.
Checked the ITU-T documents and found out it’s a standard procedure.

You know what should be done.

Android: dlopen fails with “has text relocations” issue

For some reason, I dug out some apps I programmed several years ago, rebuilt them, and put them on my MI NOTE (Android 6.0) to run some tests.

Here is my cross compile environment:

  • NDK: former downloaded, r7c + r8b
  • SDK: newly downloaded, 24.4.1

But when I tried to run the App on my phone, I got an error like this:

02-15 14:42:58.540: I/OpenGLRenderer(3260): Initialized EGL, version 1.4
02-15 14:42:58.699: W/InputMethodManager(3260): Ignoring onBind: cur seq=164, given seq=163
02-15 14:43:06.718: I/Timeline(3260): Timeline: Activity_launch_request time:6144239
02-15 14:43:06.877: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libffmpeg.so: has text relocations
02-15 14:43:06.897: D/FFMpeg(3260): Couldn't load lib: ffmpeg - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libffmpeg.so: has text relocations
02-15 14:43:06.905: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libezgl.so: has text relocations
02-15 14:43:06.910: D/FFMpeg(3260): Couldn't load lib: ezgl - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libezgl.so: has text relocations
02-15 14:43:06.920: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libeasyonvif.so: has text relocations
02-15 14:43:06.927: D/FFMpeg(3260): Couldn't load lib: easyonvif - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libeasyonvif.so: has text relocations
02-15 14:43:06.927: W/System.err(3260): rg4.net.onvifplayer.RSException: Couldn't load native libs
02-15 14:43:06.927: W/System.err(3260):     at rg4.net.onvifplayer.libEasyRTSP.<init>(libEasyRTSP.java:40)
02-15 14:43:06.927: W/System.err(3260):     at rg4.net.onvifplayer.PlayerActivity.<init>(PlayerActivity.java:33)
02-15 14:43:06.927: W/System.err(3260):     at java.lang.Class.newInstance(Native Method)
02-15 14:43:06.927: W/System.err(3260):     at android.app.Instrumentation.newActivity(Instrumentation.java:1068)
02-15 14:43:06.927: W/System.err(3260):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2322)
02-15 14:43:06.927: W/System.err(3260):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2481)
02-15 14:43:06.928: W/System.err(3260):     at android.app.ActivityThread.access$900(ActivityThread.java:153)
02-15 14:43:06.928: W/System.err(3260):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1349)
02-15 14:43:06.928: W/System.err(3260):     at android.os.Handler.dispatchMessage(Handler.java:102)
02-15 14:43:06.928: W/System.err(3260):     at android.os.Looper.loop(Looper.java:148)
02-15 14:43:06.928: W/System.err(3260):     at android.app.ActivityThread.main(ActivityThread.java:5432)
02-15 14:43:06.928: W/System.err(3260):     at java.lang.reflect.Method.invoke(Native Method)
02-15 14:43:06.928: W/System.err(3260):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:735)
02-15 14:43:06.928: W/System.err(3260):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:625)
02-15 14:43:06.956: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libffmpeg.so: has text relocations
02-15 14:43:06.962: D/FFMpeg(3260): Couldn't load lib: ffmpeg - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libffmpeg.so: has text relocations
02-15 14:43:06.968: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libezgl.so: has text relocations
02-15 14:43:06.974: D/FFMpeg(3260): Couldn't load lib: ezgl - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libezgl.so: has text relocations
02-15 14:43:06.985: E/linker(3260): /data/app/rg4.net.onvifplayer-1/lib/arm/libeasyonvif.so: has text relocations
02-15 14:43:06.991: D/FFMpeg(3260): Couldn't load lib: easyonvif - dlopen failed: /data/app/rg4.net.onvifplayer-1/lib/arm/libeasyonvif.so: has text relocations
02-15 14:43:06.991: W/System.err(3260): rg4.net.onvifplayer.RSException: Couldn't load native libs
02-15 14:43:06.991: W/System.err(3260):     at rg4.net.onvifplayer.libEasyRTSP.<init>(libEasyRTSP.java:40)
02-15 14:43:06.991: W/System.err(3260):     at rg4.net.onvifplayer.PlayerActivity.onCreate(PlayerActivity.java:65)
02-15 14:43:06.992: W/System.err(3260):     at android.app.Activity.performCreate(Activity.java:6303)
02-15 14:43:06.992: W/System.err(3260):     at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1108)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2374)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2481)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread.access$900(ActivityThread.java:153)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1349)
02-15 14:43:06.992: W/System.err(3260):     at android.os.Handler.dispatchMessage(Handler.java:102)
02-15 14:43:06.992: W/System.err(3260):     at android.os.Looper.loop(Looper.java:148)
02-15 14:43:06.992: W/System.err(3260):     at android.app.ActivityThread.main(ActivityThread.java:5432)
02-15 14:43:06.992: W/System.err(3260):     at java.lang.reflect.Method.invoke(Native Method)
02-15 14:43:06.992: W/System.err(3260):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:735)
02-15 14:43:06.992: W/System.err(3260):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:625)
02-15 14:43:06.994: E/art(3260): No implementation found for int rg4.net.onvifplayer.libEasyRTSP.NewInstance() (tried Java_rg4_net_onvifplayer_libEasyRTSP_NewInstance and Java_rg4_net_onvifplayer_libEasyRTSP_NewInstance__)

Solution 1:

This issue can be solved by checking the targetSdkVersion in the manifest file.

Using “22” instead of “23” as the targetSdkVersion solved it. (See below)

        <uses-sdk
            android:minSdkVersion="15"
            android:targetSdkVersion="22" />

I also checked the build.gradle file for the compile version and targetSdkVersion:

compileSdkVersion 22
buildToolsVersion '22.0.1'

defaultConfig {
    minSdkVersion 15
    targetSdkVersion 22
}

Solution 2:

It was caused by ffmpeg, and it can also be solved by patching in the latest ffmpeg code.


I took the latest from https://github.com/FFmpeg/FFmpeg

You will also need HAVE_SECTION_DATA_REL_RO declared somewhere in your build for the macro in asm.S to use the dynamic relocations option.

Further information:

Previous versions of Android would warn if asked to load a shared library with text relocations:

“libfoo.so has text relocations. This is wasting memory and prevents security hardening. Please fix.”.

Despite this, the OS would load the library anyway. Marshmallow rejects the library if your app’s target SDK version is >= 23. The system no longer logs this, because it assumes that your app will log the dlopen(3) failure itself and include the text from dlerror(3), which does explain the problem. Unfortunately, lots of apps seem to catch and hide the UnsatisfiedLinkError thrown by System.loadLibrary in this case, often leaving no clue that the library failed to load until you try to invoke one of your native methods and the VM complains that it’s not present.

You can use the command-line scanelf tool to check for text relocations. You can find advice on the subject on the internet; for example https://wiki.gentoo.org/wiki/Hardened/Textrels_Guide is a useful guide.

And you can check whether your shared library has text relocations by doing this:

readelf -a path/to/yourlib.so | grep TEXTREL

If it has text relocations, it will show you something like this:

0x00000016 (TEXTREL)                    0x0

If this is the case, you may recompile your shared library with the latest NDK version available:

ndk-build -B -j 8

And if you check it again, the grep command will return nothing.

Do the ZTE T800 and HUAWEI TEx0 support T.140?

Both the ZTE T800 and the HUAWEI TEx0 claim to support T.140, but after digging into these devices by running some tests between the T800, TE40 and TE60, my current status is: I’m not persuaded.

Maybe only because I don’t know how to configure them to enable T.140.

Here is some T.140-related information, and my steps in analyzing the protocols of the HUAWEI TEx0 and the ZTE T800.

A screenshot of HUAWEI TEx0’s administration manual about T.140.



1. T.140 related standard documents





6) RFC4103 – RTP Payload for Text Conversation.pdf

2. Major descriptions of implementing T.140

T.140 related descriptions in T-REC-H.323-200002-S!AnnG!PDF-E.

1) H.245 TCS for T.140

In the capabilities exchange, when using a reliable channel, specify:

DataApplicationCapability.application = t140
DataProtocolCapability = tcp

In the capabilities exchange, when using an unreliable channel, specify:

DataApplicationCapability.application = t140
DataProtocolCapability = udp

2) H.245 Open Logical Channel

In the Open Logical Channel procedure, specify:

OpenLogicalChannel.forwardLogicalChannelParameters = dataType
DataType = data

And select a reliable or unreliable channel for the transfer of T.140 data by specifying the DataApplicationCapability and the DataProtocolCapability as above.

According to the description in T-REC-H.224-200501-I!!PDF-E, although there should be only one H.221 channel, we can still send multiple protocols, like FECC, T.120 and T.140, in one single channel; this type of channel has a name: the H.221 MLP data channel.

3) Packetization of T.140 data

Reliable TCP mode: skipped, because I didn’t find any newly established TCP connections.

Unreliable mode: I did find an H.224 capability in both of these entities, and there are no OLC requests other than audio, video, and H.224 data.

Let’s suppose they are re-using the H.221 MLP data channel for both FECC and T.140 transmission.

4) H.224 protocol octet structure


5) H.224 Standard Client ID Table


3. H.224 data packets sending between TE60 and T800

I managed to extract the H.224 data packets from the PCAP file.

And they are like these:

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff

Explaining the packet using the standard document’s description:




7e 7e 7e   Flag                                   Flag
00         Upper DLCI                             Q.922 Address Header
86         Lower DLCI, 0x6 or 0x7 + EA
c0         UI Mode                                Q.922 Control Octet(s)
00         Upper Destination Terminal address     Data Link Header
00         Lower Destination Terminal address
00         Upper Source Terminal address
00         Lower Source Terminal address
00         Standard Client ID
03         ES + BS
40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff      Client data octets

Comparing the extracted Standard Client ID with the H.224 Standard Client ID Table, we can draw a conclusion for this packet: it’s a CME packet, not a T.140 packet.

Now that we know how to identify the data type of H.224 data packets, we can judge all the H.224 data packets between the TE60 and the T800.

TE60 –> T800

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 80 00 80 81 12 c8 7e 7e 7e ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8 e8 0f b2 07 db 07 9f 9f 9f bf ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 fb c0 c8 a8 bf 3f 3f 7f ff

7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 fb c0 c8 a8 bf 3f 3f 7f ff

T800 –> TE60

7e 7e 7e 00 8e c0 00 00 00 00 00 03 80 00 40 81 f7 00 00 5a 00 00 4c 50 3f 3f 3f 3f 3f 3f 16

7e 7e 7e 00 8e c0 00 00 00 00 00 03 40 00 81 68 a8 0f 92 07 cb 00 28 80 3d f1 ef cf cf cf cf cf cd

7e 7e 7e 00 8e c0 00 00 00 00 80 03 a0 08 0e 45 7e 7e 7e 7e 7e 7e


Among the listed packets, there is only one packet that is not a CME packet: the one whose Standard Client ID is 0x80.

According to T-REC-H.323-200002-S!AnnG!PDF-E.pdf, we should reverse the octet value bit by bit to get the real value; the reversed real value is 0x01. After checking it against the Standard Client ID table, we know it’s a FECC packet, still not T.140.
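The classification can be mechanized. The fixed octet offsets below follow the layout table earlier in this section, and the bit reversal is the one just described; the hex strings are truncated copies of the captured packets above:

```python
def client_id(packet_hex):
    """Pull the Standard Client ID out of an H.224 packet laid out as in
    the octet table above: 3 flag octets, 2 Q.922 address octets, 1
    control octet, 2 + 2 terminal-address octets, then the client ID."""
    octets = [int(b, 16) for b in packet_hex.split()]
    return octets[10]

def reverse_bits(octet):
    """Reverse the eight bits of an octet, e.g. 0x80 -> 0x01."""
    return int(f"{octet:08b}"[::-1], 2)

cme = "7e 7e 7e 00 86 c0 00 00 00 00 00 03 40 00 81 a8"
fecc = "7e 7e 7e 00 8e c0 00 00 00 00 80 03 a0 08 0e 45"
print(client_id(cme))                  # 0 -> CME
print(reverse_bits(client_id(fecc)))   # 1 -> FECC, still not T.140
```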


God, I’m lost. Can anyone tell me how to get T.140 working on the ZTE T800 and HUAWEI TE60?

An example of AAC capability in H.245

I keep getting mail from everywhere throwing questions at me about AAC audio in H.323.

So I arranged this post as an example for my previous posts: http://rg4.net/archives/1480.html, http://rg4.net/archives/1126.html, http://rg4.net/archives/1112.html

The pcap file for this example can be downloaded here: HUAWEI_TE600-vs-ZTE_T800.pcapnp

Here it is.

1. Basic knowledge: AAC LD descriptions in 14496-3

AAC LD operates at up to a 48 kHz sampling rate and uses a frame length of 512 or 480 samples, compared to the 1024 or 960 samples used in standard MPEG-2/4 AAC, to enable coding of general audio signals with an algorithmic delay not exceeding 20 ms. Also, the size of the window used in the analysis and synthesis filterbank is reduced by a factor of 2.

And Table 1.3 – Audio Profiles definition in 14496-3 explains the AAC format definition: AAC LC or AAC LD.

2. Basic knowledge: AAC capability in description of H.245 TCS

maxBitRate: 640
ProfileAndLevel: nonCollapsing item –> parameterIdentifier: standard = 0
AAC format: nonCollapsing item –> parameterIdentifier: standard = 1
AudioObjectType: nonCollapsing item –> parameterIdentifier: standard = 3
Config(Including sample rate and channel parameters): nonCollapsing item –> parameterIdentifier: standard = 4
MuxConfig: nonCollapsing item –> parameterIdentifier: standard = 8

3. H.245 TCS of HUAWEI TE60 and ZTE T800

HUAWEI TE60:
There are two AAC capabilities:
Capability 1:
collapsing item –> parameterIdentifier=2, parameterValue=2
collapsing item –> parameterIdentifier=5, parameterValue=1
ProfileAndLevel: 24
AAC format: logical (0)
AudioObjectType: 23

Capability 2:
collapsing item –> parameterIdentifier=2, parameterValue=2
collapsing item –> parameterIdentifier=5, parameterValue=1
ProfileAndLevel: 24
AudioObjectType: 23

ZTE T800:
There are four AAC capabilities:
Capability 1:
Capability 2:
Capability 3:
Capability 4:

4. Detail parameters in OLC command

TE60 OLC to T800:
maxBitRate: 1280
item 0 –> parameterIdentifier=2, parameterValue=2
item 1 –> parameterIdentifier=5, parameterValue=1
item 0 –> parameterIdentifier=0, value=25
item 1 –> parameterIdentifier=1, value=logical (0)
item 2 –> parameterIdentifier=3, value=23
item 3 –> parameterIdentifier=6, value=logical (0)
item 4 –> parameterIdentifier=8, octetString = 41 01 73 2a 00 11 00
item 5 –> parameterIdentifier=9, octetString = 00 00 00

AOT=23 –> AAC LD
MuxConfig = 41 01 73 2a 00 11 00 –> LATM format
Sample rate index = (MuxConfig[2] & 0x0f) = 0x73 & 0x0f = 3 → 48 kHz
Channel = (MuxConfig[3]&0xf0)>>4 = (0x2a & 0xf0) >> 4 = 0x20 >> 4 = 2 = Stereo

HUAWEI sent open logical channel with AAC LD stereo to ZTE.
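The nibble arithmetic above can be sketched in Python; the sampling-frequency-index table is the standard one from ISO/IEC 14496-3, while the byte offsets simply follow the post's own calculation (treat this as a sketch for these particular MuxConfig strings, not a general LATM parser):

```python
# Sampling-frequency index table from ISO/IEC 14496-3; index 3 = 48000 Hz.
SAMPLE_RATES = {0: 96000, 1: 88200, 2: 64000, 3: 48000, 4: 44100, 5: 32000,
                6: 24000, 7: 22050, 8: 16000, 9: 12000, 10: 11025, 11: 8000}

def parse_mux_config(octets: bytes):
    """Extract (sample rate, channel count) the way the post does:
    low nibble of byte 2 is the frequency index,
    high nibble of byte 3 is the channel configuration."""
    freq_index = octets[2] & 0x0F
    channels = (octets[3] & 0xF0) >> 4
    return SAMPLE_RATES[freq_index], channels

# TE60 -> T800: 41 01 73 2a 00 11 00 (stereo)
print(parse_mux_config(bytes.fromhex("4101732a001100")))  # (48000, 2)
# T800 -> TE60: 41 01 73 1a 00 11 00 (mono)
print(parse_mux_config(bytes.fromhex("4101731a001100")))  # (48000, 1)
```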

T800 OLC to TE60:
maxBitRate: 1280
item 0 –> parameterIdentifier=2, parameterValue=2
item 1 –> parameterIdentifier=5, parameterValue=1
item 0 –> parameterIdentifier=0, value=25
item 1 –> parameterIdentifier=1, value=logical (0)
item 2 –> parameterIdentifier=3, value=23
item 3 –> parameterIdentifier=6, value=logical (0)
item 4 –> parameterIdentifier=8, octetString = 41 01 73 1a 00 11 00
item 5 –> parameterIdentifier=9, octetString = 00 00 00

AOT=23 –> AAC LD
MuxConfig = 41 01 73 1a 00 11 00 –> LATM format
Sample rate = (MuxConfig[2]&0x0f) = 0x73 & 0x0f = 3 = 48 kHz
Channel = (MuxConfig[3]&0xf0)>>4 = (0x1a & 0xf0) >> 4 = 0x10 >> 4 = 1 = Mono

ZTE sent open logical channel with AAC LD mono to HUAWEI.
Any further questions?

How to Change the Buffer on VLC

The VLC media player includes file cache and stream buffer options to enable fine-grained control over video playback on machines with limited system resources. If you use VLC to stream network video, you can set the buffer size on a per-stream or permanent basis. For local file playback, you can raise or lower the file cache size to limit the amount of memory VLC uses or the frequency with which it accesses the disk. For systems with low memory, a low cache setting makes more resources available to the operating system.

Permanently Change the Streaming Buffer

  • Click “Tools” and select “Preferences.” In the lower left of the Preferences dialog, select the “All” button under “Show Settings” to display the advanced settings.
  • Select “Stream Output” from the sidebar menu. The setting that affects buffer size is labeled “Stream Output Muxer Caching.”
  • Enter a new amount in milliseconds in the Muxer Caching field. Since this setting requires a value in milliseconds, the amount of memory it uses varies with the streaming video’s quality. If you have ample RAM but a slow network connection, a high setting such as 2,000 ms to 3,000 ms is safe. You may need to experiment to find the right setting for your machine.

Change the Buffer for Individual Streams

  • Press “Ctrl-N” to open a new network stream, then enter a URL in the address field. VLC supports HTTP, FTP, MMS, UDP and RTSP protocols, and you must enter the full URL in the address field.
  • Select “Show More Options” to display advanced settings for the current network stream. The Caching option controls the streaming buffer size.
  • Enter an amount in milliseconds in the Caching field, then click “Play.” Depending on the cache setting, the video may take a few seconds to start streaming.

Change the Buffer for Local Files

  • Click “Tools” and select “Preferences.” In the lower left of the Preferences dialog, select the “All” button under “Show Settings” to display the advanced settings.
  • Select “Input / Codecs” from the sidebar menu, then scroll to the Advanced section in the Input / Codecs panel.
  • Enter a new amount in the File Caching field. The default setting is 300 ms, which results in VLC accessing your disk three times per second. If video playback stutters on your machine, increasing this setting can make it smoother. However, depending on your RAM and CPU resources, you may need to experiment to find the right setting.

Tips & Warnings

  • Information in this article applies to VLC 2.1.5. It may vary slightly or significantly with other versions.

Source: http://www.ehow.com/how_8454118_change-buffer-vlc.html

Vendor ID, Product ID information in SIP

As you may know, to be a robust meeting entity, we must take good care of compatibility requirements for devices from different manufacturers.

In H.323 protocol, we can use fields like Vendor ID, Product ID, Version ID in the signaling commands.

But how to do this when you are using SIP protocol?

  1. Definitions in RFC 3261

20.35 Server

   The Server header field contains information about the software used
   by the UAS to handle the request.

   Revealing the specific software version of the server might allow the
   server to become more vulnerable to attacks against software that is
   known to contain security holes. Implementers SHOULD make the Server
   header field a configurable option.

      Server: HomeServer v2

20.41 User-Agent

   The User-Agent header field contains information about the UAC
   originating the request.  The semantics of this header field are
   defined in [H14.43].

   Revealing the specific software version of the user agent might allow
   the user agent to become more vulnerable to attacks against software
   that is known to contain security holes.  Implementers SHOULD make
   the User-Agent header field a configurable option.

      User-Agent: Softphone Beta1.5



  2. [H14.43] User-Agent definition in RFC 2616

14.43 User-Agent

The User-Agent request-header field contains information about the user agent originating the request. This is for statistical purposes, the tracing of protocol violations, and automated recognition of user agents for the sake of tailoring responses to avoid particular user agent limitations. User agents SHOULD include this field with requests.

The field can contain multiple product tokens (section 3.8) and comments identifying the agent and any subproducts which form a significant part of the user agent. By convention, the product tokens are listed in order of their significance for identifying the application.

User-Agent     = "User-Agent" ":" 1*( product | comment )


User-Agent: CERN-LineMode/2.15 libwww/2.17b3



  3. How do TANDBERG and Polycom implement this?

User-Agent format of TANDBERG 775
Server format of TANDBERG 775


User-Agent format of Polycom

So, jump to the conclusion:

  1. As UAC, identify yourself in User-Agent field.
  2. As UAS, identify yourself in Server field.

Comparing the TANDBERG and Polycom implementations, TANDBERG's format is the more correct one.
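The conclusion can be stated as a one-line rule; here is a toy illustration (the helper is hypothetical, not part of any SIP stack):

```python
def identity_header(role: str, product: str) -> str:
    """RFC 3261: a UAC identifies itself in User-Agent, a UAS in Server."""
    field = "User-Agent" if role == "UAC" else "Server"
    return f"{field}: {product}"

print(identity_header("UAC", "Softphone Beta1.5"))  # User-Agent: Softphone Beta1.5
print(identity_header("UAS", "HomeServer v2"))      # Server: HomeServer v2
```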

Troubleshooting: step by step crash analysis

This post's goal is to guide a beginner through analyzing a crash by reading the assembly code.

The example listed here is not a perfect one, because the crash point is not obvious, and the real cause of the crash in this example still remains uncovered.

My point is that we can use this kind of approach to analyze a crash, and once you have read this post, you can take the first step yourself. If you run into any problems when analyzing your own crash, well, we can discuss them together here. Here we go. Continue reading “Trouble shooting: step by step to analysis crashes”

A common bug of HD3 series terminals

An issue of call establishment delay when conferencing with Polycom MCU RMX2000

The situation was

1. Meeting entities
1). Polycom MCU: Polycom RMX 2000, version ID: 8.3.0
2). Kedacom HD3 H600 SP4

2. Call scenario
HD3 joined a multi-point conference with RMX2000.
1) All the H.225 and H.245 processes were OK.
2) OLC request from both side returned with ACK.
3) The audio packets could be captured right after the OLC ACK.
4) The video packets from HD3 were sent right after it got the OLC ACK from the MCU.
5) HD3 could not receive any video packets from the MCU.
6) HD3 was waiting for a terminalYouAreSeeing conferenceIndication from the MCU to switch its status to InConf…
7) 20 seconds later we finally got the terminalYouAreSeeing indication, and along with it, the video.

It seemed the MCU was waiting for a command to switch its status to an established mode.
But we just didn't know what it was, even after testing lots of terminals from Polycom, TANDBERG, HUAWEI and ZTE, all of which worked fine.

All we knew was that it had to be an HD3 bug.

After a long, long comparison of the pcap files, the only difference found was the H.224 channel.
We did not open the H.224 (FECC) channel together with the audio, video and H.239 channels, which caused the RMX2000 to wait 20 seconds before sending the terminalYouAreSeeing indication.
It's yet another long-existing bug; we survived it for a long time, but today we finally ran into the consequences.

PCAP file: a-common-bug-of-hd3-series-terminals.pcap

RTCP and AVPF related missing features

Most of the missing features are AVPF-related, defined in RFC 4585 and RFC 5104.

RFC4585: Extended RTP Profile for Real-time Transport Control Protocol (RTCP)-Based Feedback (RTP/AVPF)
RFC5104:  Codec Control Messages in the RTP Audio-Visual Profile with Feedback (AVPF)

AVPF contains a mechanism for conveying such a message, but did not specify for which codec and according to which syntax the message should conform.  Recently, the ITU-T finalized Rec.H.271, which (among other message types) also includes a feedback message.  It is expected that this feedback message will fairly quickly enjoy wide support.  Therefore, a mechanism to convey feedback messages according to H.271 appears to be desirable.

RTCP Receiver Report Extensions
1. CCM – Codec Control Message
2. FIR – Full Intra Request Command
A Full Intra Request (FIR) Command, when received by the designated
media sender, requires that the media sender sends a Decoder Refresh
Point (see section 2.2) at the earliest opportunity.  The evaluation
of such an opportunity includes the current encoder coding strategy
and the current available network resources.

FIR is also known as an “instantaneous decoder refresh request”,
“fast video update request” or “video fast update request”.

3. TMMBR – Temporary Maximum Media Stream Bit Rate Request
4. TMMBN – Temporary Maximum Media Stream Bit Rate Notification

Example from RFC5104:

Receiver A: TMMBR_max total BR = 35 kbps, TMMBR_OH = 40 bytes
Receiver B: TMMBR_max total BR = 40 kbps, TMMBR_OH = 60 bytes

For a given packet rate (PR), the bit rate available for media
payloads in RTP will be:

Max_net media_BR_A =
TMMBR_max total BR_A – PR * TMMBR_OH_A * 8 … (1)

Max_net media_BR_B =
TMMBR_max total BR_B – PR * TMMBR_OH_B * 8 … (2)

For a PR = 20, these calculations will yield a Max_net media_BR_A =
28600 bps and Max_net media_BR_B = 30400 bps, which suggests that
receiver A is the limiting one for this packet rate.  However, at a
certain PR there is a switchover point at which receiver B becomes
the limiting one.  The switchover point can be identified by setting
Max_media_BR_A equal to Max_media_BR_B and breaking out PR:

      TMMBR_max total BR_A – TMMBR_max total BR_B
PR = ----------------------------------------------- … (3)
           8 * (TMMBR_OH_A – TMMBR_OH_B)
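Plugging the example numbers into equations (1), (2) and (3) can be checked with a short Python sketch (the helper name is made up):

```python
def max_net_media_br(tmmbr_max_total_br, tmmbr_oh, pr):
    """Equations (1)/(2): bit rate left for media payload at packet rate pr."""
    return tmmbr_max_total_br - pr * tmmbr_oh * 8

br_a, oh_a = 35000, 40  # Receiver A: 35 kbps total, 40 bytes overhead
br_b, oh_b = 40000, 60  # Receiver B: 40 kbps total, 60 bytes overhead

print(max_net_media_br(br_a, oh_a, 20))  # 28600 -> A is limiting at PR = 20
print(max_net_media_br(br_b, oh_b, 20))  # 30400

# Equation (3): switchover packet rate
pr_switch = (br_a - br_b) / (8 * (oh_a - oh_b))
print(pr_switch)  # 31.25
```

At PR = 32 the same helper yields 24760 bps for A and 24640 bps for B, so past the switchover point receiver B is indeed the limiting one.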

5. TSTR – Temporal-Spatial Trade-off Request

6. TSTN – Temporal-Spatial Trade-off Notification

7. VBCM – H.271 Video Back Channel Message

8. RTT – Round Trip Time
A media sender that receives a request closely after
sending a decoder refresh point — within 2 times the longest round
trip time (RTT) known, plus any AVPF-induced RTCP packet sending
delays — should await a second request message to ensure that the
media receiver has not been served by the previously delivered
decoder refresh point.  The reason for the specified delay is to
avoid sending unnecessary decoder refresh points.

9a. PLI – Picture Loss Indication
9b. SLI – Slice Loss Indication
9c. RPSI – Reference Picture Selection Indication
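The RTT-based suppression window described above can be sketched as a simple predicate (simplified: RFC 5104 actually says to await a second request inside the window rather than decide from timestamps alone; all names are hypothetical):

```python
def outside_suppression_window(now, last_refresh, rtt, rtcp_delay=0.0):
    """True if a refresh request arrives later than 2*RTT (plus any
    AVPF-induced RTCP sending delay) after the last decoder refresh
    point, i.e. it should be honored immediately."""
    return (now - last_refresh) > 2 * rtt + rtcp_delay

print(outside_suppression_window(10.0, 9.9, rtt=0.2))  # False: await a 2nd request
print(outside_suppression_window(10.0, 9.0, rtt=0.2))  # True: send a refresh point
```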

Here's a sample INVITE command relayed from FreeSWITCH:

INVITE sip:1009@;transport=tcp SIP/2.0
Via: SIP/2.0/TCP;branch=z9hG4bK6p37yQX86QXar
Route: <sip:1009@>;transport=tcp
Max-Forwards: 69
From: "Extension 1008" <sip:1008@>;tag=DKK4FpBB3ptSS
To: <sip:1009@;transport=tcp>
Call-ID: 0199ec1f-9e53-1233-8583-000c29f7d152
CSeq: 77747697 INVITE
Contact: <sip:mod_sofia@;transport=tcp>
User-Agent: FreeSWITCH-mod_sofia/1.7.0+git~20150614T062551Z~a647b42910~64bit
Supported: timer, path, replaces
Allow-Events: talk, hold, conference, presence, as-feature-event, dialog, line-seize, call-info, sla, include-session-description, presence.winfo, message-summary, refer
Content-Type: application/sdp
Content-Disposition: session
Content-Length: 495
X-FS-Support: update_display,send_info
Remote-Party-ID: "Extension 1008" <sip:1008@>;party=calling;screen=yes;privacy=off

o=FreeSWITCH 1436150633 1436150634 IN IP4
c=IN IP4
t=0 0
m=audio 16890 RTP/AVP 96 0 8 101
a=rtpmap:96 opus/48000/2
a=fmtp:96 useinbandfec=1; stereo=0; sprop-stereo=0
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:101 telephone-event/8000
a=fmtp:101 0-16
m=video 22404 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 profile-level-id=42801F
a=rtcp-fb:96 ccm fir tmmbr
a=rtcp-fb:96 nack
a=rtcp-fb:96 nack pli
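A minimal sketch of collecting the a=rtcp-fb capabilities per payload type from such an SDP body (the function name is made up):

```python
def rtcp_fb_map(sdp: str):
    """Map payload type -> list of declared RTCP feedback mechanisms."""
    fb = {}
    for line in sdp.splitlines():
        if line.startswith("a=rtcp-fb:"):
            pt, _, mech = line[len("a=rtcp-fb:"):].partition(" ")
            fb.setdefault(int(pt), []).append(mech)
    return fb

sdp = """m=video 22404 RTP/AVP 96
a=rtpmap:96 H264/90000
a=rtcp-fb:96 ccm fir tmmbr
a=rtcp-fb:96 nack
a=rtcp-fb:96 nack pli"""

print(rtcp_fb_map(sdp))  # {96: ['ccm fir tmmbr', 'nack', 'nack pli']}
```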

Sample SDP of WebRTC for Firefox

GET /socket.io/1/websocket/GgKg1qt9TCXtfPb6n2g0 HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:36.0) Gecko/20100101 Firefox/36.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: pPQe97SI5k09yaPnVLa2RQ==
Connection: keep-alive, Upgrade
Pragma: no-cache
Cache-Control: no-cache
Upgrade: websocket

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: Lm5/9dDv4pjyphQQeswS+V+AiKc=

5:::{"name":"log","args":[[">>> Message from server: ","Room foo has 1 client(s)"]]}
5:::{"name":"log","args":[[">>> Message from server: ","Request to create or join room","foo"]]}
5:::{"name":"joined","args":["foo"]}
5:::{"name":"emit(): client GgKg1qt9TCXtfPb6n2g0 joined room foo"}
5:::{"name":"log","args":[[">>> Message from server: ","Got message: ","got user media"]]}
5:::{"name":"message","args":[{"type":"offer","sdp":"
o=mozilla...THIS_IS_SDPARTA-38.0 2820695485956467000 0 IN IP4
t=0 0
a=fingerprint:sha-256 7C:7B:AE:C2:AE:ED:14:39:A4:7A:EE:4B:FB:FE:90:90:E8:A1:0B:C1:50:FC:C8:9C:FA:28:68:22:EE:1C:F6:97
a=group:BUNDLE sdparta_0 sdparta_1
a=msid-semantic:WMS *
m=audio 9 RTP/AVP 109 9 0 8
c=IN IP4
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
a=msid:{69b2b229-1dc0-4291-a703-aafe505d477b} {ebc6bb1c-8525-4a70-9601-354b53c5c103}
a=rtpmap:109 opus/48000/2
a=rtpmap:9 G722/8000/1
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=ssrc:4051396866 cname:{f0f8a3ab-8c54-4694-872a-98dd14f0c821}
m=video 9 RTP/AVP 126 97
c=IN IP4
a=fmtp:120 max-fs=12288;max-fr=60
a=fmtp:126 profile-level-id=42e01f;level-asymmetry-allowed=1;packetization-mode=1
a=fmtp:97 profile-level-id=42e01f;level-asymmetry-allowed=1
a=msid:{69b2b229-1dc0-4291-a703-aafe505d477b} {34f61d33-9fe4-42ff-8e2b-ef9c465c6f67}
a=rtcp-fb:120 nack
a=rtcp-fb:120 nack pli
a=rtcp-fb:120 ccm fir
a=rtcp-fb:126 nack
a=rtcp-fb:126 nack pli
a=rtcp-fb:126 ccm fir
a=rtcp-fb:97 nack
a=rtcp-fb:97 nack pli
a=rtcp-fb:97 ccm fir
a=rtpmap:126 H264/90000
a=rtpmap:97 H264/90000
a=ssrc:3993721606 cname:{f0f8a3ab-8c54-4694-872a-98dd14f0c821}



About H.235 encryption algorithms

[20150822 Update] For the record, I didn't find the right ITU-T Recommendation when I wrote this post, and was misled by a claim that the HUAWEI VP9650 supports AES-256; when I sent out a call from the VP9650, it showed a new DH group, DH1536, so I arbitrarily concluded that DH1536 means AES-256. Obviously, that was a terrible mistake.

We were planning to upgrade our H.235 encryption from AES-128 to AES-256, but didn't know where to start, and did not find a shortcut to achieve it.

We had hoped to gather some information about AES-256 by capturing pcap files from other video-conference solution providers, but did not find a device that actually supports the H.235 + AES-256 feature.

So we returned to the ITU-REC document for more details.  The right ITU-REC should be T-REC-H.235.6-201401-I!!PDF-E.pdf.

Some key steps of implementing H.235:

1. SETUP: The caller sends a public key token (DHSet) in its H.225 SETUP, which includes:

1) halfkey: contains the random public key of one party

2) modsize: contains the DH prime

3) generator: contains the DH group

1. Public key token DHSet in H.225 SETUP

2. CONNECT: The callee generates a key token using the caller's public key and sends it back to the caller in its H.225 CONNECT, also including halfkey, modsize and generator.

2. Private key token of DHSet in H.225 CONNECT

3. TCS: Both caller and callee send their H.245 TCS with H.235 capabilities.

3. Sample H.245 TCS with H.235 capability

4. MasterSlave determination

5. The master generates a media key which will be used to encrypt/decrypt the media.

5a. OLC (from master): Open a logical channel with the specified H.235 media and send it to the slave.

H235Key in H.245 OLC

5b. OLC ACK (from master): Reply with the media key to the OLC requester (the slave).

H235Key in H.245 OLC ACK

6. Some other H.245 request/indication messages, such as encryptionUpdateRequest, encryptionUpdate.
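The halfkey/modsize/generator exchange in steps 1 and 2 is plain Diffie-Hellman; below is a toy Python sketch with a deliberately small modulus (real H.235 uses the standardized 1024/1536-bit MODP groups; every constant here is illustrative only):

```python
import secrets

P = 0xFFFFFFFB  # toy prime modulus, stands in for the DH prime ("modsize")
G = 5           # generator ("generator")

a = secrets.randbelow(P - 2) + 1  # caller's private value
half_key_a = pow(G, a, P)         # "halfkey" carried in H.225 SETUP

b = secrets.randbelow(P - 2) + 1  # callee's private value
half_key_b = pow(G, b, P)         # "halfkey" returned in H.225 CONNECT

# Both sides now derive the same shared secret, later used to protect
# the media key carried in OLC / OLC ACK.
shared_caller = pow(half_key_b, a, P)
shared_callee = pow(half_key_a, b, P)
assert shared_caller == shared_callee
```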

H.235 encryption related node definitions:

1. The tokenOID in H.225 SETUP and CONNECT messages:
1a. H600’s tokenOID

1b. Huawei MCU’s tokenOID
ProductId: VP9650, versionId: V200R001C02B018SP07 Apr 28 2014 16:15:31+08
1. Public key token DHSet in H.225 SETUP - HUAWEI-MCU-VP9650
3. Sample H.245 TCS with H.235 capability - HUAWEI-MCU-VP9650

1c. Huawei TE40’s tokenOID
ProductId: TEx0, versionId: Release


“T”      {itu-t (0) recommendation (0) h (8) 235 version (0) 2 5} {itu-t (0) recommendation (0) h (8) 235 version (0) 1 5} - Used in Procedures I and IA as the baseline ClearToken for message authentication and replay protection, and optionally also for Diffie-Hellman key management as described in D.7.1.
“DH1024” {itu-t (0) recommendation (0) h (8) 235 version (0) 2 43} - 1024-bit DH group
“DH1536” {itu-t (0) recommendation (0) h (8) 235 version (0) 3 44} - 1536-bit DH group
From chapter D.11 “List of object identifiers” of T-REC-H.235-200308-S!!PDF-E.pdf, pp. 70-71.

[20150822 Update]

Earlier when I wrote this post, I hadn't found the right ITU-T standard (T-REC-H.235.6-201401-I!!PDF-E.pdf).

The information I had was that the HUAWEI VP9650 supports AES-256, and when I tried sending out a call on the VP9650, I got to know that it supports the following DH groups:

So I arbitrarily concluded that DH1536 was our goal, AES-256, but it turned out I was terribly wrong. (I don't know why the VP9650 sends out at most DH1536 while claiming AES-256 support.)

DH group - DH1536

2. Media encryption algorithm definitions in H.245 TCS, OLC, OLC ACK, etc.:

The most frequently used/seen types are:
2a. AES 128-bit CBC: 2.16.840.
2b. DES 56-bit CBC (voice encryption using DES in CBC mode and 512-bit DH group):

2.16.840. – id-aes128-ECB
2.16.840. – id-aes128-CBC
2.16.840. – id-aes128-OFB
2.16.840. – id-aes128-CFB
2.16.840. – id-aes128-GCM
2.16.840. – id-aes-CCM
2.16.840. – id-aes192-ECB
2.16.840. – id-aes192-CBC
2.16.840. – id-aes192-OFB
2.16.840. – id-aes192-CFB
2.16.840. – id-aes192-GCM
2.16.840. – id-aes192-CCM
2.16.840. – id-aes256-ECB
2.16.840. – id-aes256-CBC
2.16.840. – id-aes256-OFB
2.16.840. – id-aes256-CFB
2.16.840. – id-aes256-GCM
2.16.840. – id-aes256-CCM

Source : http://www.alvestrand.no/objectid/2.16.840.

About DH key exchange:
1. http://baike.baidu.com/view/551692.htm
2. http://www.rosoo.net/a/201507/17349.html
3. Diffie-Hellman, http://www.cryptopp.com/wiki/Diffie-Hellman
4. rfc3526: More Modular Exponential (MODP) Diffie-Hellman groups for Internet Key Exchange (IKE), https://www.ietf.org/rfc/rfc3526.txt
5. Huawei VP9650(which claims having AES256 supported): http://e.huawei.com/cn/related-page/products/enterprise-network/telepresence-video-conferencing/infrastructure/vp9600/TPVC_MCU_VP9600

Add compilation time cost for each source file in the Makefile

We ran into an extremely long compile a few days ago, so we tried to add time-cost output to the Makefile to locate what on earth was happening during the compilation.

There are two Makefiles that need to be modified: one is a Linux-based Makefile, the other an Android-based (NDK) Makefile.

Linux Makefile:

## Rules
## Suffix rules
$(SRC_DIR)/%.o: $(SRC_DIR)/%.s
    $(CC) -c -o $@ $(CFLAGS) $<
$(SRC_DIR)/%.o: $(SRC_DIR)/%.cpp
    @now=`date`; echo "==========compile at $${now}"
    $(CC) -c -o $@ $(CFLAGS) $<


@echo "Compilation begin at `date`"
@echo Compilation in progress, please wait ...
@sleep 1
@now=`date`; echo "Compilation end at $${now}"

Test Makefile: http://rg4.net/p/tools/add-time-output-to-makefile/common.mk

Android Makefile:
You need to modify the NDK common make file, definitions.mk, to achieve this; it's located at /path-to-ndk/android-ndk-r7c/build/core/definitions.mk

_CC   := $$(NDK_CCACHE) $$($$(my)CXX)

# Jacky, add timestamp to the compile output
# 1) for linux
# NOW := `date`
# 2) for windows
NOW        := %TIME:~0,10%

_TEXT := "Compile++ $$(call get-src-file-text,$1), start at: $$(NOW)"

$$(eval $$(call ev-build-source-file))

Test Makefile: http://rg4.net/p/tools/add-time-output-to-makefile/definitions.mk

A simple guide of starting use EasyRTC

We are working on a Loongson (www.loongson.cn) PC; the goal is to make it a meeting terminal. Because its CPU is MIPS architecture, there could be lots of unexpected problems, so the first thought that hit us was WebRTC.

This post is the first step to research into this topic.

Continue reading “A simple guide of starting use EasyRTC”

A simple script to sync a forked repository with the source repository

The mechanism is as simple as this:

1. Pull/clone your own repository (which was forked from another repository) to a local PC.
2. Add the original source repository as a remote of the local copy.
3. Merge the two repositories locally.
4. Resolve the conflicts if any exist.
5. Push the merged result to your repository.
6. Done; do some checks.

#Sync the forked repository with the original source

#1. First clone the repository of your own.
#Skip this step if you already have a local copy of your own fork repository
git clone https://github.com/jackyhwei/nginx-rtmp-module
cd nginx-rtmp-module

#2. Add remote repository

#Add a remote for your repository that points to the original source repository
git remote add jackyhwei https://github.com/arut/nginx-rtmp-module

#3. Fetch from the newly added remote
git fetch jackyhwei

#4. Merge the newly fetched source to master
git merge jackyhwei/master

#Manually resolve the conflicts if any exist
git commit -m "merged by jackyhwei"

#5. Push the code to your repository. BTW: git push requires you to input your username and password.
git push -u origin master

#6. Check local info
git remote -v  
git branch -a

If you've made some modifications to your own repository, you may get an error like the one below when you push:

Permission denied (publickey).
fatal: The remote end hung up unexpectedly.

It's because you didn't add a public key for your repository; use the following commands to add one:

cd ..
mv .ssh ssh_bak
ssh-keygen #this command will generate a public/private rsa key pair for you.
cd .ssh
ls #there would be two files: id_rsa and id_rsa.pub
vi id_rsa.pub

Copy the contents of id_rsa.pub to your clipboard, go to the repository administration web page, select the Deploy Keys menu, and add the deploy key (from your clipboard) there.

Try push again.