Apache Thrift: RPC between Java (server) and PHP (client), a Hello World Application

A more comprehensive tutorial can be found here.
To install the prerequisites, type:
sudo apt-get install libboost-dev libboost-test-dev libboost-program-options-dev libevent-dev automake libtool flex bison pkg-config g++ libssl-dev
sudo apt-get install php5-dev php5-cli #for php
sudo apt-get install libglib2.0-dev #for c_glib

For the final installation, download the tarball from the website, http://thrift.apache.org/download/
Then install it:

tar -xvf /path/to/tarball
cd /path/to/extraction
./configure
make
sudo make install

Also install the eclipse editor for thrift files.
Eclipse –> help –> Install new Software –>
add the URL: http://thrift4eclipse.sourceforge.net/updatesite/
tick the only package shown and install it.

Compiling the REQUIRED LIBRARIES (for the different languages that have to be supported):

  • for JAVA
    Go to the folder /path/to/thrift-version/folder/lib/java/
    and execute the command “ant” – this compiles the library using Apache Ant.
    Now the build folder contains all the lib files required.
  • for PHP
    No need to compile anything; the PHP library is used directly in source form.

Making the thrift file:

Tutorial can be found here : http://diwakergupta.github.com/thrift-missing-guide/
The thrift file declares all the services and data structures shared between the two languages.

Start with:

namespace java package-name
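
For illustration, a minimal thrift file consistent with the HelloService and sayHello call used later in this post might look like the sketch below (the namespace names are placeholders, adjust them to your project):

```thrift
namespace java com.example.hello
namespace php hello

service HelloService {
  string sayHello()
}
```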

Making the JAVA server:

Make a new project in Eclipse with type, “Dynamic Web Project”.
Put the “thrift file” in the <project-name>/Java Resources/src/ folder.
Copy the lib files (libthrift-<version>.jar, build/lib/*) to <project-name>/WebContent/WEB-INF/lib/ folder.
Generate the Java source files from the thrift file using the command:

cd path/to/thrift-file/
thrift --gen java <thrift-file-name>.thrift

The generated sources are placed in a gen-java/ directory; copy them into the src folder.

Now we have to implement the services mentioned in the thrift-file by:

  • Make a new file in the same package <package-name>.
  • Write a class <service-implement> implementing <service-name>.Iface (implement all the services this way).
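
As an illustration, a sketch of such an implementation for the HelloService used in this post could look like the following. The Iface interface below is a hand-written stand-in so the sketch is self-contained; in the real project it comes from the thrift-generated HelloService class:

```java
// Stand-in for the thrift-generated HelloService.Iface interface
// (in the real project, implement HelloService.Iface instead).
interface Iface {
    String sayHello();
}

// The hand-written implementation of the service declared in the thrift file.
class HelloServiceImpl implements Iface {
    @Override
    public String sayHello() {
        // This string is what the PHP client will print as its result.
        return "HelloWorld!!";
    }
}
```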

Now we have to make the server file:

public class server_name implements Runnable {
	/* port to listen on */
	private static final int PORT = 9090;

	public void run() {
		try {
			TServerSocket serverTransport = new TServerSocket(PORT);
			// the class implementing <service-name>.Iface goes here
			HelloService.Processor processor = new HelloService.Processor(new HelloServiceImpl());
			TServer server = new TThreadPoolServer(new TThreadPoolServer.Args(serverTransport).processor(processor));
			System.out.println("Starting server on port: "+PORT);
			server.serve();   // blocks, serving requests
		} catch(TTransportException e) {
			System.out.println("Message: "+e.getMessage());
			e.printStackTrace();
		}
	}

	public static void main(String[] args) {
		new Thread(new server_name()).run();
	}
}

Run the server as a java application. This completes the making of the server.
NOTE: To stop the server you’ll need to kill the process via the console.
Making the PHP client:
First auto-generate the PHP package from the thrift file already created using the command:
cd path/to/thrift-file/
thrift --gen php <thrift-file-name>.thrift

Create a new directory named “thrift” and copy all the php library files available in the directory /path/to/thrift-version-folder/lib/php/src/ to the newly created directory. Also create a new sub-directory named “packages” in “thrift” directory, and copy the auto-generated PHP package here.

Create a new file <client-file>.php adjacent to the “thrift” directory.
Contents of the PHP file will be:

// defining the port and server to listen
define("PORT", '9090');
define("SERVER", 'localhost');

//Global variable where the php library files are stored
$GLOBALS['THRIFT_ROOT'] = 'thrift';

//including the library files
require_once $GLOBALS['THRIFT_ROOT'].'/Thrift.php';
require_once $GLOBALS['THRIFT_ROOT'].'/protocol/TBinaryProtocol.php';
require_once $GLOBALS['THRIFT_ROOT'].'/transport/TSocket.php';
require_once $GLOBALS['THRIFT_ROOT'].'/transport/TBufferedTransport.php';

//loading the auto-generated package
require_once $GLOBALS['THRIFT_ROOT'].'/packages/hello/HelloService.php';

try {
	//create a thrift connection
	$socket = new TSocket(SERVER, PORT);
	$transport = new TBufferedTransport($socket);
	$protocol = new TBinaryProtocol($transport);
	//create a new hello service client
	$client = new HelloServiceClient($protocol);
	//open the connection
	$transport->open();
	//calling the service
	$result = $client->sayHello();
	echo "Result: ".$result;
	//closing the connection
	$transport->close();
} catch(TException $tx) {
	echo "Thrift Exception: ".$tx->getMessage()."\r\n";
}

Run the JAVA server.
            CONSOLE: “Starting server on port: 9090”
Run the <client-file>.php using the command: php5 <client-file>.php
            CONSOLE: “Result: HelloWorld!!”
Finally, make a directory “client” and copy the client-related files here. Also make a new directory named “server” and copy all the Java server files here. So we have a simple Apache Thrift application making a bridge between Java (server) and PHP (client).

Memory profiling in C++

Code profiling, as described in the earlier post, is the dynamic analysis of the resources used by a program or a small section of it.

Here we will discuss monitoring the memory during a run of a C++ program. Monitoring memory greatly helps in optimizing your code. Things worth watching include: memory leaks (memory that is allocated but never freed, so usage keeps growing until the operating system kills the program); swapping of data (moving data between main memory and disk greatly reduces performance, as disk IO is slow compared to main memory IO); the free memory available at any point in time; when and where memory is allocated and freed; inaccessible areas of stack data; cache usage (proper use of the cache can increase performance); and heap memory usage.

NOTE: using memory management tools reduces the performance of the program (it could get 100 times slower :O ). So it should only be done during testing and development, not in production.

First, let us see the various tools freely available to do the work for us:

  • debugger – just use your C++ debugger to keep track of memory leaks and memory allocations step-by-step. But this process is very slow. Compile the program with debugging symbols (the -g flag) and then run the result under “gdb”:
        g++ -g file.cpp
        gdb ./a.out

  • sysstat – just install this package using the command (sudo apt-get install sysstat).
    Using the command “free” will tell you the memory statistics at that point in time (use the -m option to show the memory in megabytes).
  • valgrind – This is the best tool available freely (to install, type: sudo apt-get install valgrind). It has various subtools:
    • Memcheck – When a program is run under memcheck’s supervision, all reads and writes of memory are checked, and calls to malloc/new/free/delete are intercepted. Memcheck reports errors as soon as they occur, giving the source line number at which it occurred, and also a stack trace of the functions called to reach that line.
    • Cachegrind – it's a cache profiler. It tells you which parts of the code lead to cache misses, the number of cache misses, and the number of instructions executed on each line of code.
    • Massif – performs detailed heap profiling. It produces a graph showing heap usage over time, including information about which parts of the program are responsible for the most memory allocations.

    Valgrind can be easily used from the terminal. When calling your program executable just write “valgrind” before the call using the appropriate options, eg:

        valgrind --leak-check=yes <myprog> arg1 arg2

    To know the options and other methods you can refer to its documentation from here.

  • leakfinder – This is a simple GUI application for Windows to find leaks in your program, with a basic built-in code editor.
  • gperftools – a set of tools developed by Google to help developers create more robust applications, especially useful for those developing multi-threaded C++ applications with templates. It includes TCMalloc, heap-checker, heap-profiler and cpu-profiler.
  • dmalloc – This is another tool available on the web.

The concept used in dmalloc is quite simple, and we can make a simple library ourselves that keeps track of memory allocations and releases. The idea is based on operator overloading: we overload the global new/delete operators, so whenever memory is allocated or freed we can print the appropriate information. So let's start:

#include <execinfo.h>   // declares the backtrace() function
#include <iostream>     // std::cerr
#include <cstdlib>      // malloc/free
#include <new>          // std::bad_alloc

void *caller() {
      const int target = 3;     // trace three functions back
      void *returnaddresses[target];
      if (backtrace(returnaddresses, target) < target) {
               return NULL;
      }
      return returnaddresses[target-1];
}

void* operator new(size_t size) throw(std::bad_alloc) {
       void* ret = malloc(size);
       if (!ret) throw std::bad_alloc();
       std::cerr<<"allocate: "<<ret<<" "<<size<<" bytes from "<<caller()<<"\n";
       return ret;
}

void* operator new[] (size_t size) throw(std::bad_alloc) {
       void* ret = malloc(size);
       if (!ret) throw std::bad_alloc();
       std::cerr<<"allocate: "<<ret<<" "<<size<<" bytes from "<<caller()<<"\n";
       return ret;
}

void operator delete(void* data) {
       std::cerr<<"free: "<<data<<"\n";
       free(data);
}

void operator delete[] (void* data) {
       std::cerr<<"free: "<<data<<"\n";
       free(data);
}

Just write this code in a file and include it as a header in your program whenever you want to monitor your memory allocations and see who makes them. There is no need to write any extra code in your program. Similarly, you can interpose on the malloc/free functions (for example via the linker or LD_PRELOAD) if overloading new/delete does not work for you.

good luck.

    Code Profiling for time in C++

    Code profiling is a very important aspect of programming. You may be wondering what code profiling is; you can always google it, but in simple words, “code profiling” is just the measuring of the resources used by your program or by small sections of it.

    Here I will be talking about code profiling for “time”. Dynamically measuring how much time your code takes for different input sets is of keen importance when you have to optimise your code. There are many ways you can achieve this:

    • there are many unix tools available to do the job for you.
      • time – just type time while calling your executable file.
        eg: time ./a.out
      • sysstat (to install just type “sudo apt-get install sysstat”) – It has many tools available to check the resources used by running processes.
        While your program is running, you can check the resources it uses with the commands “iostat -c” and “iostat -dx”. If networking is involved you can use “netstat -i” and “netstat -s”. To check the memory usage, i.e., free memory, used memory, memory swapped etc., you can type “free -m”.
      • callgrind – you can download this tool to profile your code.
    • The simple way is to put a small code inside your code to measure the time. Here I’ll tell you how to do that using a small library I have written.

     Its use is as simple as writing:

    int func() {
        timeit s("func()");
        // your code here.
    }

    It is based on the simple concept that when an object goes out of scope, its destructor is called. So to time a code snippet, you just have to make an object of type “timeit” with the same scope as the code snippet. In the above example it will print to the standard error:
               func()       3.21554ms

     The object simply times the interval between its construction and destruction. If it's difficult to maintain the scope of the object using curly brackets, you can also use “new” and “delete” to set the scope manually. The code is rather simple, and here it goes:

    #include <iostream>
    #include <ctime>

    class timeit {
        char const *name;
        timespec t_start;
    public:
        timeit(char const *temp): name(temp) {
            clock_gettime(CLOCK_REALTIME, &t_start);
        }
        ~timeit() {
            timespec t_end;
            clock_gettime(CLOCK_REALTIME, &t_end);
            // elapsed time in milliseconds
            double dur = 1e3 * (t_end.tv_sec - t_start.tv_sec) + 1e-6 * (t_end.tv_nsec - t_start.tv_nsec);
            std::cerr << name << '\t' << std::fixed << dur << " ms\n";
        }
    };

    Just include this code in your program and timeit 😉

      how to hide drive in windows

      Most of you must have wondered if there exists a way, without using any software, to hide a drive completely so that it cannot be viewed or accessed without unlocking it. So here is the easiest method to hide a drive, in 4 simple steps.
      Follow these steps to do this:
      • Go to Run and type “gpedit.msc” (without quotes).
      • Select User Configuration —> Administrative Templates —> Windows Components —> Windows Explorer (click on it).
      • On the right panel, search for “Hide these specified drives in My Computer”.
      • Double click it, mark Enabled to hide the drive, and select the drive which you want to hide. If you want to hide all the drives on your system, select “Restrict all drives”. To unhide the drive again, mark the Disabled option.

      How to hide files in windows

      I guess this one is fairly important for “guys”… so just go through it. I assure you, you will find it quite useful… :D

      This is a basic feature provided by the NTFS file system. NTFS also provides the feature of encrypting a file with your own “encryption” method, but I'll come back to that later. For now let's just focus on hiding a file.

      NTFS supports multiple data streams. You must have seen the .mkv movie file; its most astonishing feature is that it can contain various types of data in one file, like multiple audio tracks, multiple subtitle files, and of course the movie :P. Similarly, in NTFS one file can contain multiple data streams independent of each other. In layman's terms, an NTFS file can contain other files totally independent of each other: e.g., a simple .pdf can contain files like .txt, .avi, .zip, etc.

      This feature can be exploited to hide files on an NTFS partition. For hiding, all we need to do is create a data stream in the container file (say important.txt) and copy the contents of the file to hide (say porn.avi) into that data stream. And one more thing: you can hide as many files as you want in one single file.

      So here’s the step by step process :
      1. Install some virtual linux environment – it can be any one of them, cygwin or UnixUtils. This is just to get the “cat” command.

      2. Now open the command prompt and write,

          cat porn.avi > important.txt:stream_name

      this stream_name can be any secret name you prefer. e.g.,

          cat porn.avi > important.txt:porn.avi

      3. The file is now saved in the data stream “important.txt:porn.avi”. The file porn.avi can now be safely deleted without losing any data.

      4. Now, to extract the file back, you just have to write the command:

          cat important.txt:porn.avi > porn.avi

      this will give you the file back you wanted. 😉

      This method is excellent, as the data stream cannot be easily detected: it does not even increase the size of the container file. But it has some limitations:

      1. This works only in NTFS file system.

      2. If the file is transferred to any other file system (FAT32, EXT2, etc.), all the data streams are lost.

      3. A data stream cannot be directly accessed by other software, so it first has to be extracted to be of use. This is not so annoying, as it is something we are already used to from compression software.

      4. Various metadata related to the file are lost in the data stream.

      5. A data stream, once created, cannot be deleted. One way around this is to make a copy of the file and delete the previous one, e.g.,
                cat important.txt > copy_important.txt
                del important.txt

      6. Data recovery tools do not handle data streams, so if the file system gets corrupted there is no way to get the file back.

      7. You have to remember the exact name of the stream to extract the file every time.