Troubleshooting: analyzing crashes step by step

This post’s goal is to guide a beginner through analyzing a crash by reading the assembly code.

The example listed here is not a perfect one, though: the crash point is not an obvious one, and the real cause of the crash still remains uncovered.

My point here is that we can analyze crashes in this kind of way, and once you have read this post, you can take the first step yourself. If you run into any problems while analyzing your own crash, we can discuss them together here. Here we go.

Debugging & troubleshooting tutorial: core dump debug

As you know, there are limit settings in a Linux OS that may be behind some frequently occurring issues, such as stack size, open files, core file generation, etc.

Here are some tips about ulimit, core files, and some related debugging tricks.

1. ulimit command


To view your OS limits, you can use the command:

ulimit -a

It will show you the current OS limits for your environment, like this:

[weiguohua@localhost ~]$ ulimit -a

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

2. Enable core file

If you wish to generate a core file when your program crashes, you can use a command like this:

ulimit -c 123456789

The number following ulimit -c is the maximum core file size (in 512-byte blocks, per the ulimit -a output above); you can set any number you wish, or use unlimited.
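For example, to remove the size cap entirely (a common setup while debugging):

```shell
# Remove the core file size cap for the current shell session.
# This only affects processes started from this shell; add it to
# ~/.bashrc or /etc/security/limits.conf to make it persistent.
ulimit -c unlimited

# Show the current soft limit to verify the change.
ulimit -c
```

Note that this is a per-session soft limit; a new login shell starts over from the configured default.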

There are also rules for the core file name; they are controlled by:

1) /proc/sys/kernel/core_pattern

Used to control the core file name format.
Default value: core
You can change it to “/home/weiguohua/corefile/core-%e-%p-%t”, which means all core files will be saved to the folder “/home/weiguohua/corefile”, and each core file will be named “core-[program name]-[pid]-[crash time]”.

echo /home/weiguohua/corefile/core-%e-%p-%t > /proc/sys/kernel/core_pattern

 2) /proc/sys/kernel/core_uses_pid

0: Default setting; the core file will not be suffixed with a PID.
1: Add a PID suffix to the core file name, such as “core.9721”.

You can change it by command:

echo 1 > /proc/sys/kernel/core_uses_pid

You can run man core for the full naming rules of core dump files.
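You can also check the current naming configuration without changing anything:

```shell
# Inspect the current core-dump naming settings (read-only).
cat /proc/sys/kernel/core_pattern    # the name template, e.g. "core"
cat /proc/sys/kernel/core_uses_pid   # 0 or 1
```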

3. Debug a core file by GDB

Debugging a core file is in many ways no different from debugging the program normally.
Assume your program is named segment.out and the generated core file is core.9721; you can start debugging with the command:

gdb ./segment.out core.9721

Then you can do anything you want with the core file, such as bt, p, etc.

P.S. If a core dump happened and you cannot locate the faulting function with a gdb backtrace, you can:
1) Use ldd to check the .so files (dynamic link libraries) this program depends on.
2) Use nm to list the symbols of those .so files, and search for the addresses seen in the core file.
3) Locate the functions by comparing the addresses in the core file against the symbol map listed by nm.
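As a sketch of steps 1) and 2), using /bin/sh as a stand-in for your own binary (your segment.out and the exact library paths will differ on your system):

```shell
# 1) List the shared libraries the binary depends on.
#    /bin/sh here is only a placeholder for your crashing program.
ldd /bin/sh

# 2) Pick one of the listed .so files and dump its dynamic symbols,
#    sorted by address, so they can be compared against the addresses
#    seen in the core file. The libc path below is an example only;
#    take the real path from the ldd output above.
# nm -D --defined-only /lib/libc.so.6 | sort | less
```

Remember that shared libraries are loaded at a base address (visible in /proc/[pid]/maps), so a crash address must be offset by that base before matching it against the nm listing.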

4. Memory maps for a running program

You can view the address map and ranges of a running program simply by cat-ing the maps file in its /proc/[pid] folder.
It will print the detailed memory map of everything the program has loaded, including the .so files it depends on.

[weiguohua@localhost 9721]# pwd
/proc/9721
[weiguohua@localhost 9721]# cat maps

00110000-00113000 r-xp 00000000 fd:00 677857     /lib/
00113000-00114000 r--p 00002000 fd:00 677857     /lib/
00114000-00115000 rw-p 00003000 fd:00 677857     /lib/
00115000-00127000 r-xp 00000000 fd:00 677856     /lib/
00127000-00128000 rw-p 00011000 fd:00 677856     /lib/
00128000-00256000 r-xp 00000000 fd:00 1967906    /usr/lib/mysql/
00256000-0029d000 rw-p 0012e000 fd:00 1967906    /usr/lib/mysql/
0029d000-002c5000 r-xp 00000000 fd:00 677847     /lib/
002c5000-002c6000 r--p 00027000 fd:00 677847     /lib/
002c6000-002c7000 rw-p 00028000 fd:00 677847     /lib/
002c7000-00439000 r-xp 00000000 fd:00 1867809    /usr/lib/
00439000-0044d000 rw-p 00172000 fd:00 1867809    /usr/lib/
0044d000-00450000 rw-p 00000000 00:00 0
00450000-00457000 r-xp 00000000 fd:00 677864     /lib/
00457000-00458000 r--p 00007000 fd:00 677864     /lib/
00458000-00459000 rw-p 00008000 fd:00 677864     /lib/
00459000-00480000 rw-p 00000000 00:00 0
00480000-0054a000 r-xp 00000000 fd:00 677879     /lib/
0054a000-00550000 rw-p 000ca000 fd:00 677879     /lib/
00550000-00595000 r-xp 00000000 fd:00 677863     /lib/
00595000-00596000 rw-p 00045000 fd:00 677863     /lib/
00596000-0059a000 rw-p 00000000 00:00 0
0059f000-005bd000 r-xp 00000000 fd:00 677835     /lib/
005bd000-005be000 r--p 0001d000 fd:00 677835     /lib/
005be000-005bf000 rw-p 0001e000 fd:00 677835     /lib/
005bf000-005c8000 r-xp 00000000 fd:00 677876     /lib/
005c8000-005c9000 rw-p 00008000 fd:00 677876     /lib/
005c9000-005e6000 r-xp 00000000 fd:00 677860     /lib/
005e6000-005e7000 r--p 0001c000 fd:00 677860     /lib/
005e7000-005e8000 rw-p 0001d000 fd:00 677860     /lib/
005f2000-00628000 r-xp 00000000 fd:00 1967263    /usr/libexec/postfix/pickup
00629000-0062a000 r--p 00036000 fd:00 1967263    /usr/libexec/postfix/pickup
0062a000-0062b000 rw-p 00037000 fd:00 1967263    /usr/libexec/postfix/pickup
0062b000-0062c000 rw-p 00000000 00:00 0
00634000-00678000 r-xp 00000000 fd:00 1867822    /usr/lib/
00678000-00679000 rw-p 00044000 fd:00 1867822    /usr/lib/
00679000-00685000 r-xp 00000000 fd:00 655000     /lib/
00685000-00686000 r--p 0000b000 fd:00 655000     /lib/
00686000-00687000 rw-p 0000c000 fd:00 655000     /lib/
006c1000-006e2000 rw-p 00000000 00:00 0          [heap]
006f9000-0086c000 r-xp 00000000 fd:00 655098     /lib/
0086c000-0086f000 rw-p 00173000 fd:00 655098     /lib/
00888000-008a0000 r-xp 00000000 fd:00 1867808    /usr/lib/
008a0000-008a1000 rw-p 00018000 fd:00 1867808    /usr/lib/
009ff000-00a02000 r-xp 00000000 fd:00 677878     /lib/
00a02000-00a03000 rw-p 00002000 fd:00 677878     /lib/
00b40000-00b6f000 r-xp 00000000 fd:00 655464     /lib/
00b6f000-00b70000 rw-p 0002e000 fd:00 655464     /lib/
00bb5000-00bb7000 r-xp 00000000 fd:00 677875     /lib/
00bb7000-00bb8000 rw-p 00001000 fd:00 677875     /lib/
00c30000-00c56000 r-xp 00000000 fd:00 677877     /lib/
00c56000-00c57000 rw-p 00026000 fd:00 677877     /lib/
00c71000-00c72000 r-xp 00000000 00:00 0          [vdso]
00cac000-00ce2000 r-xp 00000000 fd:00 677880     /lib/
00ce2000-00ce3000 rw-p 00036000 fd:00 677880     /lib/
00cfb000-00d12000 r-xp 00000000 fd:00 677844     /lib/
00d12000-00d13000 r--p 00016000 fd:00 677844     /lib/
00d13000-00d14000 rw-p 00017000 fd:00 677844     /lib/
00d14000-00d16000 rw-p 00000000 00:00 0
00dab000-00dc1000 r-xp 00000000 fd:00 655424     /lib/
00dc1000-00dc2000 r--p 00016000 fd:00 655424     /lib/
00dc2000-00dc3000 rw-p 00017000 fd:00 655424     /lib/
00dc3000-00dc5000 rw-p 00000000 00:00 0
00e4f000-00ea2000 r-xp 00000000 fd:00 1867810    /usr/lib/
00ea2000-00ea6000 rw-p 00052000 fd:00 1867810    /usr/lib/
00efc000-00f09000 r-xp 00000000 fd:00 1843537    /usr/lib/
00f09000-00f0a000 rw-p 0000c000 fd:00 1843537    /usr/lib/
00f3b000-00f50000 r-xp 00000000 fd:00 677859     /lib/
00f50000-00f51000 r--p 00014000 fd:00 677859     /lib/
00f51000-00f52000 rw-p 00015000 fd:00 677859     /lib/
00f52000-00f54000 rw-p 00000000 00:00 0
b762c000-b7630000 rw-p 00000000 00:00 0
b7630000-b77b5000 r-xp 00000000 fd:00 677841     /lib/
b77b5000-b77b6000 ---p 00185000 fd:00 677841     /lib/
b77b6000-b77b8000 r--p 00185000 fd:00 677841     /lib/
b77b8000-b77b9000 rw-p 00187000 fd:00 677841     /lib/
b77b9000-b77be000 rw-p 00000000 00:00 0
b77d3000-b77d4000 rw-p 00000000 00:00 0
bfdde000-bfdf3000 rw-p 00000000 00:00 0          [stack]
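The listing above comes from one specific process; a minimal version you can run anywhere uses the special /proc/self link, which always refers to the current process:

```shell
# Print the memory map of the current process (the cat itself).
# Each line is: address-range perms offset dev inode path.
cat /proc/self/maps | head -n 5

# The [heap] and [stack] regions are labeled explicitly:
grep -E '\[(heap|stack)\]' /proc/self/maps
```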

Debugging & troubleshooting tutorial: backtrace + addr2line

I’m planning to write a series of posts about debugging & troubleshooting tricks, and I’d like to start with backtrace. As a typical programmer (AKA nerd), I’d like to jump straight to the topic before running into blah-blah.

As everyone knows, bugs come for free, so they can be anywhere. To check out the free gift(s) you are sending to your clients, you may consider backtrace.

A Simple Tutorial for WANem

As networking-related product developers, we often face issues that cannot be easily reproduced in the lab (because in most cases we have LAN-only environments).
Here I introduce a tool, WANem, a WAN emulator from TATA. We can benefit a lot in reproducing issues or troubleshooting simply by using WANem.

What does WANem do?

WANem is a Wide Area Network Emulator, meant to provide a real experience of a Wide Area Network/Internet, during application development / testing over a LAN environment. Typically application developers develop applications on a LAN while the intended purpose for the same could be, clients accessing the same over the WAN or even the Internet. WANem thus allows the application development team to setup a transparent application gateway which can be used to simulate WAN characteristics like Network delay, Packet loss, Packet corruption, Disconnections, Packet re-ordering, Jitter, etc. WANem can be used to simulate Wide Area Network conditions for Data/Voice traffic and is released under the widely acceptable GPL v2 license. WANem thus provides emulation of Wide Area Network characteristics and thus allows data/voice applications to be tested in a realistic WAN environment before they are moved into production at an affordable cost. WANem is built on top of other FLOSS [Free Libre and OpenSource] components and like other intelligent FLOSS projects has chosen not to re-invent the wheel as much as possible.

Where to download WANem?

WANem is an open-source project hosted on SourceForge; you can download it here:

Getting started.

1. Start WANem image with VMware

1. Boot up the WANem image (Knoppix 5.3).

2. Choose your network configuration, DHCP or manual static IP. Here I’m using a static IP.

3. Set up the password for your VM (Knoppix).

4. Open a browser and navigate to WANem with a URL like this: http://ip-address/WANem

5. Add route rules for your test endpoints/peers.

6. Start WANem with a specified loss percentage.

7. Run your tests.

A sample route setting

a. Delete the default route on your endpoint

route delete

b. Add a forced route to WANem

route add mask

* You need to replace the IP address with the IP address of your own WANem server.

However, once this route is applied, all traffic from this endpoint will be forced through WANem. If you wish it to work only for one particular peer IP, you can change the command to

route add mask

This means that if you are connecting or sending data to that peer from this endpoint, all the data packets will first be routed to (the WANem server).
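As a concrete illustration of the Windows route syntax with purely hypothetical placeholder addresses (assume the WANem server is 192.168.1.100 and the peer you want to shape is 10.0.0.50; substitute your own IPs):

```
route add 10.0.0.50 mask 255.255.255.255 192.168.1.100
```

The /32 mask (255.255.255.255) limits the forced route to that single peer, so the rest of the endpoint’s traffic keeps its normal path.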

c. Test your route

You can use this command to test the route setting if your application runs under Windows:


If you have any questions about using WANem, you can leave a message for me under this post.

How to get Callstack in Android Log File

When you run into bugs or issues in your Android apps, the call stack information of your components will definitely be very helpful.

Here is a post I’ve found in the Freescale community, authored by MingZhou; it shows how to get the caller stack information for your Android app or components. The original URL is:
