CIS 307: Remote Procedure Calls

[mc], [Comer-Stevens Example], [rdb], [XDR], [ONC], [mc1], [mc2], [mc3]

[Before reading the following notes you should read Tanenbaum-vanSteen's textbook from page 69 to page 77 and from 375 to 381.]

Much of what we do when we communicate using sockets is standardized: how we establish connections between clients and servers, how data is packed into messages and extracted from them, how we organize the services on the server side as ordinary functions. So it is not surprising that people have come up with methods for mechanizing these activities. Systems where these mechanisms are available are called Remote Procedure Call (RPC) systems; among other things they provide an interface definition language, a protocol compiler, a standard data representation, and a run-time library.

These systems are usually compatible with a number of transport mechanisms and protocols. Several RPC systems are currently available. The one we will use is Open Network Computing (ONC) RPC, also known as sunrpc, since it is available for free on most Unix systems. Other systems, such as the RPC of the Distributed Computing Environment (DCE), are more comprehensive and a natural evolution of the ONC functionality. You may want to check the manual pages for the Unix commands portmap [it maps RPC program numbers to the ports where they listen] and rpcinfo [used to ask the portmap of a host which RPC services are available at that host], and look at the file /etc/rpc [it maps the names of the RPC services known on this host to their RPC program numbers].

The normal way to use an RPC system is to define an interface between the client and the server and to compile it with a protocol compiler (the protocol compiler we use is called rpcgen). This produces a number of files that are linked with the client and server code written by the programmer to produce the needed executable images. This way of doing things is extremely convenient and requires very little knowledge of protocols on the part of the programmer. Alternatively, the protocol compiler is not used and the programmer calls the RPC Application Programming Interface (API) directly. We will follow the first approach.

To make our discussion concrete, here is a simple example of client server interaction using ONC RPC.

MC: A Simple Server: A Micro Calculator

This example is clearly unrealistic: nobody would create a server that implements functions to add/subtract two integers and return the result. The purpose of the example is to show how RPC works, with as few distractions as possible. [A more recent version compiled and working on Linux is here.] Here are the files specified by the programmer:

mc.x

Let's examine mc.x:
  /*
   * mc.x: remote calculator access protocol
   */
   /* structure definitions*/
  struct mypair {
    int arg1;
    int arg2;
  };

  /* program definition, no union or typedef definitions needed */
  program MCPROG { /* could manage multiple servers */
	version MCVERS {
		int ADD(mypair) = 1;
		int SUBTRACT(mypair) = 2;
	} = 1;
  } = 0x20000002;  /* program number ranges established by ONC */
Here we see the definitions of the functions to be called remotely, and of the constants and types needed in those definitions. Everything is written in a language very similar to C, called RPCL, that describes the commands provided by the server and their possible parameters.
There is nothing interesting about the definition of mypair. The program definition is more interesting:
The program will be identified (within the server host) by its name MCPROG, bound to the number 0x20000002 [I have chosen this number; I could have chosen any number between 0x20000000 and 0x3FFFFFFF], and its version MCVERS, bound to the number 1 [again my choice; I could have chosen 1, or 2, or ..]. The individual functions are identified as 1 (for add) and 2 (for subtract). Though the definitions in mc.x do not make it clear, these functions receive their arguments and return their values by address, that is, in C we will have to use "&" and "*" appropriately. In addition the calls from the client will have as a last parameter a "client handle", where a client handle is an object representing a specific association between a client and a server [a client program could simultaneously hold a number of handles to a number of servers]. Notice that the functions receive the operation arguments combined into a single parameter of type mypair. This is not very convenient, but the protocol compiler we are using requires that we combine all the call parameters into one. [Other versions of the protocol compiler are supposed to offer, as an option, the use of multiple parameters.]
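
To make the calling conventions concrete, here is roughly what the header generated from mc.x declares for use by the client (a sketch only; the exact declarations depend on the rpcgen version, and older versions emit old-style C declarations):
  /* Sketch (an assumption, not the actual generated mc.h): the client-side
     declarations of the two remote functions. Both the argument and the
     result are passed by address, and the last parameter is the client handle. */
  extern int *add_1(mypair *argp, CLIENT *clnt);
  extern int *subtract_1(mypair *argp, CLIENT *clnt);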

mc.c

The client code is written with knowledge of the information specified in the interface and with knowledge of three RPC library functions, clnt_create, clnt_pcreateerror, and clnt_destroy but not of socket commands like socket, bind, etc. The programmer will need knowledge of additional RPC library functions only if additional functionality is desired [say, for reliability or authentication or concurrency]. Here is an example of how clnt_create and clnt_pcreateerror are used:
    CLIENT         *cl;    /* a client handle */
    if (!(cl = clnt_create(argv[1], MCPROG, MCVERS, "tcp"))) {
      /*
       * CLIENT handle couldn't be created, server not there.
       */
      clnt_pcreateerror(argv[1]);
      exit(1);
    }
where MCPROG and MCVERS are defined, as you saw, in mc.x and are the name and version of the remote program, while argv[1] is the name of the server host (something like snowhite.cis.temple.edu). The client handle returned by clnt_create will be used as the last parameter in RPC calls.

The port used by the server is not given. This is because of the presence of a daemon called the portmapper. The server registers with the portmapper the port it uses, and the client implicitly asks the portmapper for that port [the portmapper is itself a service, responding on port 111]. The programmer can also use the API of the portmapper directly; see for example the functions pmap_getport, pmap_set, and pmap_unset in the man pages.
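
For example, a small stand-alone program could ask the portmapper of a host where MCPROG is listening (a sketch, not part of the course code; the program below and its error handling are my own):
  #include <stdio.h>
  #include <string.h>
  #include <netdb.h>
  #include <netinet/in.h>
  #include <rpc/rpc.h>
  #include <rpc/pmap_clnt.h>
  #include "mc.h"

  /* Sketch: ask the portmapper of argv[1] which port MCPROG/MCVERS uses. */
  int main(int argc, char *argv[]) {
    struct sockaddr_in addr;
    struct hostent *hp;
    u_short port;

    if (argc != 2 || (hp = gethostbyname(argv[1])) == NULL) {
      fprintf(stderr, "usage: %s serverhostname\n", argv[0]);
      return 1;
    }
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    memcpy(&addr.sin_addr, hp->h_addr_list[0], hp->h_length);
    /* pmap_getport contacts the portmapper, which itself listens on port 111 */
    port = pmap_getport(&addr, MCPROG, MCVERS, IPPROTO_TCP);
    if (port == 0)
      fprintf(stderr, "MCPROG is not registered on %s\n", argv[1]);
    else
      printf("MCPROG is served at port %u\n", (unsigned) port);
    return 0;
  }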

Here is how the client program calls the remote functions:

      v = (*add_1(&p,cl));
      v = (*subtract_1(&p,cl));
where cl is the client handle, v is an integer, and p is a "pair structure" with two integers. Notice the name we have used, "add_1": it is the lowercase form of the name ADD we introduced in mc.x, with the version number appended.
Since we are using TCP, a TCP connection is established between client and server when clnt_create is called. This connection will remain in place until clnt_destroy is called.
The client code is written as a "main program".
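
Putting these pieces together, a minimal mc.c might look as follows (a sketch under my own assumptions about the command-line arguments; the actual course file may differ):
  #include <stdio.h>
  #include <stdlib.h>
  #include <rpc/rpc.h>
  #include "mc.h"

  int main(int argc, char *argv[]) {
    CLIENT *cl;          /* client handle */
    mypair  p;           /* the two operands, combined in one parameter */
    int    *vp;          /* the result, returned by address */

    if (argc != 4) {
      fprintf(stderr, "usage: %s serverhost arg1 arg2\n", argv[0]);
      exit(1);
    }
    if (!(cl = clnt_create(argv[1], MCPROG, MCVERS, "tcp"))) {
      clnt_pcreateerror(argv[1]);    /* server not there */
      exit(1);
    }
    p.arg1 = atoi(argv[2]);
    p.arg2 = atoi(argv[3]);

    if ((vp = add_1(&p, cl)) == NULL)
      clnt_perror(cl, "add_1");
    else
      printf("%d + %d = %d\n", p.arg1, p.arg2, *vp);

    if ((vp = subtract_1(&p, cl)) == NULL)
      clnt_perror(cl, "subtract_1");
    else
      printf("%d - %d = %d\n", p.arg1, p.arg2, *vp);

    clnt_destroy(cl);    /* tear down the TCP connection */
    return 0;
  }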

mc_svc_proc.c

This code is written with knowledge of the information specified in the interface, but without need of socket and network commands or of RPC library functions. It is not written as a "main program"; it consists only of the functions that will be called remotely and of auxiliary data structures and definitions. As you can see, it is trivial code:

  #include <stdio.h>
  #include <rpc/rpc.h>
  #include "mc.h"

  int v;

  int *add_1(mypair *p) {
    v = p->arg1+p->arg2;
    return &v;
  }
  int *subtract_1(mypair *p) {
    v = p->arg1-p->arg2;
    return &v;
  }

rpcgen

The programmer next compiles the interface definition mc.x with rpcgen, a so-called protocol compiler. [You may want to check the manual page for rpcgen.] The compilation generates the header mc.h, the client stub mc_clnt.c, the server stub mc_svc.c, and the XDR conversion routines mc_xdr.c. Here is the log of how the executables for the client mc and the server mc_svc are created:
  rpcgen mc.x
  cc -c -o mc.o -g -DDEBUG mc.c 
  cc -g -DDEBUG -c mc_clnt.c
  cc -g -DDEBUG -c mc_xdr.c
  cc -g -DDEBUG -o mc mc.o mc_clnt.o mc_xdr.o  
  cc -c -o mc_svc_proc.o -g -DDEBUG mc_svc_proc.c
  cc -g -DDEBUG -c mc_svc.c
  cc -g -DDEBUG -o mc_svc mc_svc_proc.o mc_svc.o  mc_xdr.o 
Then the server will be launched as just
  mc_svc &
(we use '&' so that the server runs in the background). A client will be launched as
  mc serverhostname
For example if the server is on yoda.cis.temple.edu, we will call
  mc yoda.cis.temple.edu
It would be possible to use the inetd daemon to run servers on demand, without having to launch them ourselves as we did above. The use of inetd would require root privilege so as to modify the files /etc/services and /etc/inetd.conf. [inetd is a super-server. It is launched when Unix is started and monitors the ports specified in /etc/services for the services specified in /etc/inetd.conf. When these ports are accessed, it launches the corresponding servers, if they are not already active. The servers so launched can be given a deadline so that if they are inactive for more than the specified deadline, they terminate. Of course the aim is to minimize the number of idle active servers. This idea of a server whose business is to monitor the existence of regular servers and to minimize the number of executing servers and the effort required to manage them is a powerful one. It is carried out to a greater extent and at a higher level in the ORBs of CORBA, which we will discuss later in the course.]

Beware

If you examine the code in mc_svc_proc.c you will see that the functions add_1 and subtract_1, as written by the user, have only one parameter. Yet if you look at the generated program that calls these functions, mc_svc.c, you will see that the calls made to add_1 and subtract_1, done through the function pointer local, have two arguments: one is the argument we expected, the other is an extra argument rqstp of type struct svc_req (on my system this type is defined in /usr/include/rpc/svc.h). This second parameter conveys a lot of information about the RPC call: the protocol used, the service, function, version, the raw data received, etc.
  1. This argument does not affect the called function if it is not used there. Just look at the stack of the functions being called:
                     +------------+        ^
                     |  older fp  |        |  High Memory
                     +------------+        |
                     | locals and |
                     | temporaries|
                     | of caller  |
                     +------------+        |
                     |  arg 2     |        |  Stack growth
                     +------------+        V
                     |  arg 1     |
                     +------------+
                     |  return    |
                     +------------+
                     |  old fp    | <-- frame pointer (fp)
                     +------------+
                     | locals and |
                     | temporaries|
                     | of callee  |
                     +------------+ <-- stack pointer (sp)
    
    The second argument affects neither the caller (which knows that there are two arguments) nor the callee (since the positions of return and arg1 relative to the frame pointer are not affected by arg2).
  2. The people who write the code for the called functions (add_1 and subtract_1 in our case) can, if desired, take over and use the information in the second parameter as they wish, for debugging or to communicate directly with the RPC client, as in the sketch below.
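
For example, with the calling convention just described, add_1 could be written to accept the extra parameter and inspect it (a sketch, not the course code; the printed fields rq_prog, rq_vers and rq_proc are members of struct svc_req):
  #include <stdio.h>
  #include <rpc/rpc.h>
  #include "mc.h"

  static int v;

  int *add_1(mypair *p, struct svc_req *rqstp) {
    /* rqstp describes the incoming call: program, version and procedure numbers */
    fprintf(stderr, "serving prog %lu vers %lu proc %lu\n",
            (unsigned long) rqstp->rq_prog,
            (unsigned long) rqstp->rq_vers,
            (unsigned long) rqstp->rq_proc);
    v = p->arg1 + p->arg2;
    return &v;
  }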

Comer and Stevens Example

In Internetworking with TCP/IP, Vol. 3, Comer and Stevens suggest eight steps in developing a distributed application using RPC:
  1. Build and test a conventional application [i.e. one that runs on a single system] that solves the problem.
  2. Divide the program by choosing a set of procedures to move to a remote machine. Place the selected procedures in a separate file.
  3. Write an rpcgen specification for the remote program, including names and numbers for the remote procedures and the declarations of their arguments. Choose a remote program number and a version number (usually 1).
  4. Run rpcgen to check the specification and, if valid, generate the four source code files that will be used in the client and server.
  5. Write stub interface routines for the client side and server side.
  6. Compile and link together the client program. It consists of four main files: the original application program (with the remote procedures removed), the client-side stub (generated by rpcgen), the client-side interface stub, and the XDR procedures (generated by rpcgen). When all these files have been compiled and linked together, the resulting executable program becomes the client.
  7. Compile and link together the server program. It consists of four main files: the procedures taken from the original application that now comprise the remote program, the server-side stub (generated by rpcgen), the server-side interface stub, and the XDR procedures (generated by rpcgen). When all these files have been compiled and linked together, the resulting executable program becomes the server.
  8. Start the server on the remote machine and then invoke the client on the local machine.
An example by Comer-Stevens demonstrates their approach. dict.c is the conventional solution of the problem: a program interacting with a dictionary. in is a file with commands to be given to the dictionary program. The conventional program is divided into two programs by choosing the functions to move to a remote machine. rdict.x is the specification of the interface to the dictionary system. rpcgen compiles rdict.x, producing four files: rdict.h, rdict_clnt.c, rdict_svc.c, and rdict_xdr.c. Then the programmer writes the client-side stub interface routines rdict_cif.c and the server-side stub interface routines rdict_sif.c. Then the client is compiled and linked
  gcc -o rdict rdict.c rdict_clnt.c rdict_xdr.c rdict_cif.c
and so is the server
  gcc -o rdictd rdict_svc.c rdict_xdr.c rdict_sif.c rdict_srp.c
Or, better, use the available Makefile, then run the server with
  rdict &
and the client with
  rdict < in

The RDB Example

In Power Programming with RPC, Bloomer gives a number of examples of the use of Remote Procedure Calls. Here we see one such application. It is a simple program where clients can send requests to a database administered by a server. It is very similar to the mc application we have already seen. The files specified by the programmer are the interface rdb.x, the client rdb.c, and the server procedures rdb_svc_proc.c.

From rdb.x the protocol compiler rpcgen generates the header rdb.h, the client stub rdb_clnt.c, the server stub rdb_svc.c, and the XDR routines rdb_xdr.c.

Here is the log of how the executables for the client rdb and the server rdb_svc are created:
  rpcgen rdb.x
  cc -c -o rdb.o -g -DDEBUG rdb.c 
  cc -g -DDEBUG -c rdb_clnt.c
  cc -g -DDEBUG -c rdb_xdr.c
  cc -g -DDEBUG -o rdb rdb.o rdb_clnt.o rdb_xdr.o  
  cc -c -o rdb_svc_proc.o -g -DDEBUG rdb_svc_proc.c
  cc -g -DDEBUG -c rdb_svc.c
  cc -g -DDEBUG -o rdb_svc rdb_svc_proc.o rdb_svc.o  rdb_xdr.o 
Then the server will be launched as just
  rdb_svc
and any client will be launched as
  rdb serverhostname dbkey dbvalue

XDR

The XDR protocol is used to represent data in a machine- and language-independent form. It is defined in RFC 1014. When a sender sends data to a receiver, it converts the data from the local form (normally a C representation) to the XDR form and transmits the XDR form. The receiver, on receiving the data, converts it from XDR back to its local form.

If you are using a protocol compiler for the RPC, you do not need to know anything about XDR and its API. If you are not using a protocol compiler, then you need to know the XDR API. Information about it can be found, say, with the command

  man xdr
Here is an example of use of the XDR API to write to and read from a file in XDR format. It is the program portable.c from Bloomer's book.
  #include <rpc/xdr.h>
  #include <stdio.h>

  short           sarray[] = {1, 2, 3, 4};

  int main(void)
  {
    FILE           *fp;
    XDR             xdrs;
    int             i;

    /*
     * Encode the 4 shorts.
     */
    fp = fopen("data", "w");
    xdrstdio_create(&xdrs, fp, XDR_ENCODE);
    for (i = 0; i < 4; i++)
      if (xdr_short(&xdrs, &(sarray[i])) == FALSE)
        fprintf(stderr, "error writing to stream\n");

    xdr_destroy(&xdrs);
    fclose(fp);

    /*
     * Decode the 4 shorts.
     */
    fp = fopen("data", "r");
    xdrstdio_create(&xdrs, fp, XDR_DECODE);
    for (i = 0; i < 4; i++)
      if (xdr_short(&xdrs, &(sarray[i])) == FALSE)
        fprintf(stderr, "error reading stream\n");
      else
        printf("%d\n", sarray[i]);

    xdr_destroy(&xdrs);
    fclose(fp);
  }

ONC

The Open Network Computing (ONC) API is essentially irrelevant to the programmer who uses a protocol compiler [only clnt_create is required, and clnt_pcreateerror and clnt_destroy are desirable]. It becomes relevant if we do things like authentication, or complex recovery procedures, or if we want to have a server interacting concurrently with multiple clients, etc. We will next see how to use fork on the server side.

MC1: The Micro Calculator: Forking a server for each request

We now want to fork, on the server side, a different process to handle each client request. We will make two changes in the previous code; no other code is modified.

MC2: The Micro Calculator: Creating a Thread for each request

Nothing here changes with respect to mc1, except that we modify mc2_svc.c to create threads instead of forking. There is an added complexity because we want the threads to operate each on different data.
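
One way to deal with that complexity, sketched below under my own assumptions (this is not the actual mc2_svc_proc.c), is to make the result variable thread-local, so that each serving thread works on its own copy; this relies on C11 _Thread_local (older compilers spell it __thread) and on the reply being sent by the same thread that computed the result:
  #include <rpc/rpc.h>
  #include "mc.h"

  /* Each thread gets its own copy of v, so concurrent requests do not
     overwrite each other's results. */
  static _Thread_local int v;

  int *add_1(mypair *p) {
    v = p->arg1 + p->arg2;
    return &v;
  }
  int *subtract_1(mypair *p) {
    v = p->arg1 - p->arg2;
    return &v;
  }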

MC3: The Micro Calculator: Now the Threads use locks

We now use locks to synchronize the threads in our silly micro calculator example. We assume that we keep a global variable that contains the sum of the results of all past operations. We then use a mutex variable to ensure mutual exclusion between operations. The results of these minor changes are seen in the files mc3_svc.c and mc3_svc_proc.c. All the files are available at mc3. When creating the server image be sure to use the command modifier -threads.
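
Roughly, the change amounts to the following (a sketch with names of my own choosing, not the actual mc3_svc_proc.c):
  #include <pthread.h>
  #include <rpc/rpc.h>
  #include "mc.h"

  static int total = 0;          /* sum of the results of all past operations */
  static pthread_mutex_t total_lock = PTHREAD_MUTEX_INITIALIZER;
  static _Thread_local int v;    /* per-thread result, as in mc2 */

  int *add_1(mypair *p) {
    v = p->arg1 + p->arg2;
    pthread_mutex_lock(&total_lock);     /* mutual exclusion on the shared sum */
    total += v;
    pthread_mutex_unlock(&total_lock);
    return &v;
  }
  int *subtract_1(mypair *p) {
    v = p->arg1 - p->arg2;
    pthread_mutex_lock(&total_lock);
    total += v;
    pthread_mutex_unlock(&total_lock);
    return &v;
  }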

Note that the use of RPC has simplified considerably the task of exchanging information across computers. The situation is considerably easier than when we use sockets directly. The transport mechanism, whether TCP or UDP, etc., is hidden. Of course basic problems, such as how to ensure reliability of communication, how to recover from crashes, or how best to solve concurrency and performance problems, remain.

ingargiola@cis.temple.edu