CIS 307: Homework 3

Using Unix System Services (shared memory, file locks).
Given March 6, 1997, due March 26, 1997 by 10pm.

This homework is a continuation of Homework 2. It has similar behavior but a different implementation. There is no longer a STORE_MANAGER process, and there is no equivalent of the old main program. Now the RAND_PROC processes are main programs, each run in a separate window on your workstation. You may run as many of these processes as you want, but at least two. The RAND_PROC processes are "clients" that do the work of the "server" themselves.

The RAND_PROC processes now share a segment of their virtual memories. This shared segment contains all the data structures previously managed by the STORE_MANAGER process, plus an integer field called COUNT. This brings two kinds of complications to the problem.
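One possible layout for the shared segment is sketched below; the number of entries and the ENTRY type are placeholders standing in for whatever you used in Homework 2.

  #define NENTRIES 10                 /* placeholder: number of store entries        */

  typedef struct {                    /* placeholder for the Homework 2 entry type   */
      int  key;
      char value[80];
  } ENTRY;

  typedef struct {                    /* everything that lives in the shared segment */
      ENTRY entries[NENTRIES];        /* the store previously kept by STORE_MANAGER  */
      int   count;                    /* COUNT: number of running RAND_PROC processes */
  } SHARED_STORE;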

Some complications are due to the fact that the STORE_MANAGER process no longer exists. The requests from the RAND_PROC processes are therefore not serialized by the STORE_MANAGER, and you have to use appropriate locks within the code executed by the RAND_PROC processes. Since there are no pipes, all the commands (STORE_READ and STORE_UPDATE) will be carried out as function calls from the RAND_PROC processes, with the appropriate parameters, and will return the appropriate values when the operation is completed.
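For example, each RAND_PROC might call functions with prototypes such as these (hypothetical names and signatures, reusing the placeholder types sketched above):

  /* Each RAND_PROC performs the old commands itself: lock the entry, do the
     work, log a record to CHILDprocid.DAT, and return when the operation is
     complete.  A return value of 0 could mean success.                       */
  int store_read(SHARED_STORE *store, int index, ENTRY *result);
  int store_update(SHARED_STORE *store, int index, const ENTRY *newvalue);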
As a side effect of each call, a record will be written to the CHILDprocid.DAT file (procid is now the actual process id; by the way, use this process id as the seed for the random number generator) with the following information:

To create critical regions use file locking. Use a separate lock for each entry of the store, plus a lock for COUNT (use a single file, say filelocks.dat, with the different locks represented by different records in the file). See the example on Read and Write locks to see how to protect read and write operations.
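A possible implementation of the per-record locks with fcntl is sketched below. The record layout (one byte per lock, with the COUNT lock stored after the entry locks) and the helper names are assumptions, and error checking is omitted; the lock file should be opened once with open("filelocks.dat", O_RDWR | O_CREAT, 0666).

  #include <fcntl.h>
  #include <unistd.h>

  #define RECSIZE    1               /* one byte per lock record is enough    */
  #define COUNT_LOCK NENTRIES        /* record NENTRIES is reserved for COUNT */

  /* Lock record 'rec' of filelocks.dat.  type is F_RDLCK for readers and
     F_WRLCK for writers; F_SETLKW blocks until the lock is granted.       */
  void lock_record(int fd, int rec, short type)
  {
      struct flock fl;

      fl.l_type   = type;
      fl.l_whence = SEEK_SET;
      fl.l_start  = rec * RECSIZE;   /* offset of this lock's record          */
      fl.l_len    = RECSIZE;         /* lock just that record                 */
      fcntl(fd, F_SETLKW, &fl);
  }

  void unlock_record(int fd, int rec)
  {
      lock_record(fd, rec, F_UNLCK);
  }

A read operation on entry i would then be bracketed by lock_record(fd, i, F_RDLCK) and unlock_record(fd, i), and an update by the same calls with F_WRLCK.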

Other complications arise since we are using shared memory.
To share memory among processes use the shmget, shmat, shmdt, and shmctl operations. shmget creates a segment; it should be invoked with IPC_PRIVATE as its key. The segment should be created by the first RAND_PROC process and deleted by the last one. The process that creates the shared segment should save the id of the segment to a file, segid.dat. The number of running processes will be stored in COUNT.
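The four calls, in the order a single process would use them, look roughly like this (a compact sketch with no error checking; SHARED_STORE is the placeholder type from above):

  #include <sys/types.h>
  #include <sys/ipc.h>
  #include <sys/shm.h>

  int main(void)
  {
      /* Create a new private segment large enough for the shared store. */
      int shmid = shmget(IPC_PRIVATE, sizeof(SHARED_STORE), IPC_CREAT | 0600);

      /* Attach (map) the segment into this process's address space. */
      SHARED_STORE *store = (SHARED_STORE *) shmat(shmid, NULL, 0);

      /* ... write shmid to segid.dat, work on *store ... */

      /* Detach the segment from this process. */
      shmdt(store);

      /* Remove the segment from the system (only the last process does this). */
      shmctl(shmid, IPC_RMID, NULL);
      return 0;
  }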
The RAND_PROC processes check for the existence of the segid.dat file. [For simplicity we assume that all the RAND_PROC processes are yours and that they are run from a single directory which is yours.]
If segid.dat is not there, they should

  1. create and initialize [by reading its content from the init.dat file] the segment,
  2. set COUNT to 1,
  3. create the segid.dat file and write the segment id to it.
If segid.dat is there, they just read the segment id from it and increment COUNT.
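Putting the two cases together, the start-up code might look roughly as follows. This is a sketch under the assumptions made earlier (the lock_record/unlock_record helpers, the COUNT_LOCK record, the SHARED_STORE type); load_init is a hypothetical helper that fills the store from init.dat, and error checking is omitted. The whole phase is protected by the lock on COUNT, as required further below.

  #include <stdio.h>
  #include <fcntl.h>
  #include <sys/types.h>
  #include <sys/ipc.h>
  #include <sys/shm.h>

  SHARED_STORE *join_store(int lockfd, int *shmidp)
  {
      SHARED_STORE *store;
      FILE *f;

      lock_record(lockfd, COUNT_LOCK, F_WRLCK);      /* protect the whole phase     */
      if ((f = fopen("segid.dat", "r")) == NULL) {
          /* First process: create the segment, fill it from init.dat,
             set COUNT to 1, and publish the segment id in segid.dat.  */
          *shmidp = shmget(IPC_PRIVATE, sizeof(SHARED_STORE), IPC_CREAT | 0600);
          store = (SHARED_STORE *) shmat(*shmidp, NULL, 0);
          load_init(store);                          /* hypothetical: read init.dat */
          store->count = 1;
          f = fopen("segid.dat", "w");
          fprintf(f, "%d\n", *shmidp);
      } else {
          /* Later processes: read the id, attach, and increment COUNT. */
          fscanf(f, "%d", shmidp);
          store = (SHARED_STORE *) shmat(*shmidp, NULL, 0);
          store->count++;
      }
      fclose(f);
      unlock_record(lockfd, COUNT_LOCK);
      return store;
  }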
Once a RAND_PROC process has the id of the segment, it attaches to it with shmat. It will then, as in Homework 2, randomly execute TABLE_READ and TABLE_UPDATE. As in Homework 2 these are slow operations. [But now we need not have an agenda; we just sleep between the beginning and the end of an operation.]
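One iteration of that loop might look like the sketch below; the locking and logging that would otherwise live inside store_read/store_update are shown inline, and srand(getpid()) is assumed to have been called once at start-up.

  #include <stdlib.h>
  #include <unistd.h>
  #include <fcntl.h>

  void one_operation(SHARED_STORE *store, int lockfd, int *reads, int *updates)
  {
      ENTRY copy;
      int i = rand() % NENTRIES;             /* pick a random entry              */

      if (rand() % 2) {                      /* TABLE_READ                       */
          lock_record(lockfd, i, F_RDLCK);   /* readers may share the entry lock */
          copy = store->entries[i];          /* the read itself                  */
          sleep(1);                          /* the operation is slow            */
          unlock_record(lockfd, i);
          (*reads)++;
          /* ... append a record describing the read (and copy) to CHILDprocid.DAT ... */
      } else {                               /* TABLE_UPDATE                     */
          lock_record(lockfd, i, F_WRLCK);   /* writers need exclusive access    */
          store->entries[i].key = rand();    /* placeholder update               */
          sleep(1);                          /* the operation is slow            */
          unlock_record(lockfd, i);
          (*updates)++;
          /* ... append a record describing the update to CHILDprocid.DAT ... */
      }
  }

The reads and updates counters are what menu entry 1 below reports.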
In addition, a RAND_PROC process interacts with the user by displaying a menu with the following entries:
  1. Print the Number of Read and Write operations completed by this process
  2. Terminate.
Since a RAND_PROC process has to carry out two concurrent kinds of activities:
  1. interact with the user, and
  2. interact with the table,
it would be great if we had threads. But we don't yet. So we will cheat: assume that after each operation on the table we check whether there is input from the user [this is a form of non-blocking IO; we check if there is input, and if there is we read it and execute the appropriate action; see the hints and the sketch below], in which case we do what is requested before executing, if necessary, the next command on the table. [Be sure to display the menu correctly.]
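One common way to poll for pending input without blocking is select with a zero timeout; the sketch below assumes the menu choices are read from standard input.

  #include <sys/types.h>
  #include <sys/time.h>
  #include <unistd.h>

  /* Return 1 if input is waiting on standard input, 0 otherwise.  A zero
     timeout makes select return immediately, so we never block waiting
     for the user.                                                        */
  int user_input_ready(void)
  {
      fd_set readfds;
      struct timeval tv;

      FD_ZERO(&readfds);
      FD_SET(0, &readfds);           /* file descriptor 0 is standard input */
      tv.tv_sec  = 0;
      tv.tv_usec = 0;
      return select(1, &readfds, NULL, NULL, &tv) > 0;
  }

After each table operation the process calls user_input_ready(); if it returns 1, it reads the choice, carries it out, and redisplays the menu before going back to the table.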

At termination a RAND_PROC process goes through a sequence of actions that mirrors initialization: it decrements COUNT, detaches from the segment, and, if it is the last process, removes the segment.

To protect the initialization phase [where you check on segid.dat, etc.] and the termination phase [where you decrement COUNT, etc.] you should use the lock on COUNT.
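The termination phase might then look like the sketch below, mirroring join_store above. Removing segid.dat when the last process leaves is an assumption (it lets the next run start from scratch), and error checking is again omitted.

  #include <stdio.h>
  #include <fcntl.h>
  #include <sys/types.h>
  #include <sys/ipc.h>
  #include <sys/shm.h>

  void leave_store(SHARED_STORE *store, int shmid, int lockfd)
  {
      int last;

      lock_record(lockfd, COUNT_LOCK, F_WRLCK);   /* protect the whole phase        */
      store->count--;
      last = (store->count == 0);
      shmdt(store);                               /* detach from the segment        */
      if (last) {
          shmctl(shmid, IPC_RMID, NULL);          /* last process removes segment   */
          remove("segid.dat");                    /* assumed: clean up for next run */
      }
      unlock_record(lockfd, COUNT_LOCK);
  }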

Often people forget, or are unable, to delete memory segments. At the shell level, use the command ipcs to determine what shared resources are being used in the system, and ipcrm to remove them if they are no longer needed.

ingargiola@cis.temple.edu