EE382C Embedded Software Systems
Process Networks Scheduling

Process Networks

A Process Network program is a set of processes that communicate through a network of unbounded first-in first-out (FIFO) queues, also known as channels. In general, it is undecidable whether a Process Network program can be scheduled in bounded memory, and likewise undecidable whether a Process Network program will terminate (halt).

Tom Parks, in his 1995 dissertation, suggested the following algorithm to schedule Process Network programs:

1. Assign a fixed capacity to every FIFO queue, and block any process that attempts to write to a full queue.
2. Execute the program until it deadlocks.
3. If the deadlock is artificial, i.e. at least one process is blocked writing to a full queue, then increase the capacity of the smallest full queue and continue execution from step 2.

If a bounded implementation exists, then Parks' algorithm will run in bounded memory, although not necessarily in the minimum amount of memory.
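Parks' policy (run with bounded queues; on artificial deadlock, enlarge the smallest full queue) can be sketched as a cooperative simulation. The `Channel`/`parks_schedule` API below and the producer/consumer demo are invented for illustration, not part of Parks' dissertation:

```python
from collections import deque

class Channel:
    """Bounded FIFO; Parks' algorithm starts every queue small and grows it on demand."""
    def __init__(self, capacity=1):
        self.buf = deque()
        self.capacity = capacity

def parks_schedule(procs, max_steps=200):
    """Round-robin the processes; on artificial deadlock, grow the smallest full queue.

    Each process is a generator yielding ('write', chan, value) or ('read', chan);
    the token read is delivered back through send().
    """
    live = list(procs)
    pending = {p: p.send(None) for p in live}      # advance each to its first request
    grown = 0
    for _ in range(max_steps):
        progressed = False
        for p in list(live):
            req = pending[p]
            op, ch = req[0], req[1]
            try:
                if op == 'write' and len(ch.buf) < ch.capacity:
                    ch.buf.append(req[2])          # write succeeds: queue not full
                    pending[p] = p.send(None)
                    progressed = True
                elif op == 'read' and ch.buf:      # read succeeds: queue not empty
                    pending[p] = p.send(ch.buf.popleft())
                    progressed = True
            except StopIteration:                  # process finished
                live.remove(p)
                del pending[p]
                progressed = True
        if not live:
            return 'complete', grown
        if not progressed:
            writes = [pending[p] for p in live if pending[p][0] == 'write']
            if not writes:
                return 'true deadlock', grown      # everyone blocked on empty reads
            smallest = min((r[1] for r in writes), key=lambda c: c.capacity)
            smallest.capacity += 1                 # Parks' rule: grow smallest full queue
            grown += 1
    return 'still running', grown                  # non-terminating within the budget

# Tiny demo: the producer must buffer two tokens on ch1 before the consumer
# unblocks, so capacity 1 causes an artificial deadlock that the scheduler
# resolves by enlarging ch1 once.
ch1, ch2 = Channel(), Channel()

def producer():
    yield ('write', ch1, 'a')
    yield ('write', ch1, 'b')      # blocks while ch1 (capacity 1) is full
    yield ('write', ch2, 'go')

def consumer():
    yield ('read', ch2)            # waits for 'go' before draining ch1
    yield ('read', ch1)
    yield ('read', ch1)

status, grown = parks_schedule([producer(), consumer()])
```

Note that the scheduler only reacts to global deadlock, where no process at all can move; this is exactly the weakness the examples below exploit.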

Parks' algorithm gives non-terminating execution priority over bounded execution, and bounded execution priority over complete execution. (Parks, March 9, 2004, e-mail correspondence) For example, when Parks' algorithm is applied to the following process network, a bounded, non-terminating (but partial) execution occurs.

A ------> C
        ^
       /
      /
B ------> D

In this example, C always reads from the A->C arc; i.e., C never reads from the B->C arc. As a Kahn Process Network, the B->C arc would build up an unbounded number of tokens because A and B are always enabled.

If we impose a capacity of one token on each FIFO queue, then B will artificially deadlock on its second attempt to write to the B->C arc, and D will block on a read, yet A and C will keep running. If A and C fire at the same rate, then the program as a whole never deadlocks. Since no global deadlock occurs, the queue-resizing algorithm Parks proposed in his dissertation is never invoked. In this case, Parks' approach finds a partial, bounded execution instead of a complete, unbounded one.
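This behavior can be checked with a small round-based simulation (a hypothetical sketch; the variable names are invented). A and C fire every round; B blocks forever after its first write fills the one-token B->C queue, and because B never gets past that blocked write, D starves; yet no global deadlock ever arises, so the resizing step never runs:

```python
from collections import deque

CAP = 1
ac, bc = deque(), deque()          # queues A->C and B->C, capacity one token each

a_fired = c_fired = 0
b_blocked_rounds = 0
for _ in range(1000):
    if len(ac) < CAP:              # A writes a token to A->C
        ac.append('tok')
        a_fired += 1
    if len(bc) < CAP:              # B writes to B->C; after the first token the
        bc.append('tok')           # queue stays full, so B blocks here forever and
    else:                          # never reaches its write to B->D -- D starves
        b_blocked_rounds += 1
    if ac:                         # C reads only from A->C, never from B->C
        ac.popleft()
        c_fired += 1
```

After 1000 rounds, A and C have each fired 1000 times, B has been write-blocked for 999 rounds, and the B->C queue is still full at its original capacity.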

Hence, Parks' scheduling algorithm does not always lead to complete execution. This limitation is explored further in later work, notably Geilen and Basten's 2003 paper "Requirements on the Execution of Kahn Process Networks" (ESOP 2003).

The 2003 paper distinguishes between "global" and "local" artificial deadlock, and points out that Parks' 1995 algorithm does not solve the latter. The paper is concerned with local deadlocks that happen even when a bounded schedule exists. On page 330, the authors propose that the smallest queue should be enlarged as soon as there is a "local" artificial deadlock, rather than waiting for a global one. (Alex Olson, March 8, 2004, e-mail correspondence)

Comments from Tom Parks (March 9, 2004)

A trivial solution to the difficulties of the above example would be to have C modify the program graph when it knows that it will never again read from B. Just splice in the equivalent of /dev/null to consume and discard any data produced by B.
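The splice Parks describes is only a line or two of code. The `null_sink` function and its queue below are hypothetical names for illustration:

```python
from collections import deque

def null_sink(queue):
    """Sketch of the /dev/null splice: consume and discard everything B
    writes on the B->C arc, so B can never block on that queue again."""
    discarded = 0
    while queue:               # in a live network this loop would block, not exit
        queue.popleft()        # token is thrown away
        discarded += 1
    return discarded

bc = deque([16, 32, 64])       # tokens from B that C will never read
drained = null_sink(bc)        # the queue is emptied; B is free to write again
```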

Instead of taking this easy way out, I would modify the example slightly to make it more interesting. Let C's consumption of data from B depend on the data values from A. For example, C could be an ordered merge with A and B producing monotonic sequences. Let A produce a constant sequence such as { 7, 7, 7, 7, ... } and let B produce the sequence of powers of 2 { 1, 2, 4, 8, ...} on the channel leading to C and anything you like on the channel leading to D. After C reads the token with value 8 from B, it will never again read another token from B (it just keeps reading 7s from A). If you insist on a complete execution, rather than merely a non-terminating execution, then you will have an unbounded execution. Of course any implementation running on a real system will eventually run out of memory, so you won't really have a complete execution, and you might not even have a non-terminating execution depending on how you react to running out of memory.
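Parks' ordered-merge process can be sketched as a generator over two monotonic streams (the function name and stream construction here are illustrative, not from his e-mail):

```python
from itertools import count, islice

def ordered_merge(a, b):
    """Merge two monotonic streams, always emitting the smaller head next."""
    xa, xb = next(a), next(b)
    while True:
        if xb <= xa:
            yield xb
            xb = next(b)
        else:
            yield xa
            xa = next(a)

A = iter(lambda: 7, None)               # constant stream 7, 7, 7, 7, ...
B = (2 ** k for k in count())           # powers of two: 1, 2, 4, 8, ...

first_ten = list(islice(ordered_merge(A, B), 10))
# -> [1, 2, 4, 7, 7, 7, 7, 7, 7, 7]
```

C emits 1, 2, and 4 from B, reads (and then holds) the 8, and from then on only ever emits 7s from A, so B's later output accumulates without bound.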

Now let's make a small change. Let A produce the sequence of consecutive integers { 1, 2, 3, 4, ... } and let B produce the same output as before. This example is clearly bounded, since a channel capacity of 1 token is sufficient to allow complete execution. The tricky part is that as the computation progresses, B will spend more and more time blocked waiting to write to C while C is busy consuming data from A. How long do you let B wait before you give up and increase the channel capacity? If you aren't careful, then you run the risk of unbounded execution.
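The growing wait can be measured with a variant of the merge that tags each token with its source (again an illustrative sketch, not Parks' code): the run of A-tokens between successive B-reads doubles each time, so B's write-blocked stretches grow without bound even though capacity one is sufficient.

```python
from itertools import count, islice

def ordered_merge_tagged(a, b):
    """Ordered merge over monotonic streams that reports each token's source."""
    xa, xb = next(a), next(b)
    while True:
        if xb <= xa:
            yield ('B', xb)
            xb = next(b)
        else:
            yield ('A', xa)
            xa = next(a)

# A = 1, 2, 3, ... and B = 1, 2, 4, 8, ...: count how many A-tokens C
# consumes between successive reads from B.
gaps, run = [], 0
for src, _ in islice(ordered_merge_tagged(count(1), (2 ** k for k in count())), 40):
    if src == 'B':
        gaps.append(run)
        run = 0
    else:
        run += 1
# gaps -> [0, 1, 2, 4, 8, 16]: the wait between B-reads doubles every time
```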

I could come up with yet another variation on this example such that a bounded, complete execution existed and yet the method described in my thesis would fail to find it. Instead I would get a bounded, non-terminating execution.

In conclusion, I think that always finding a complete (as opposed to merely non-terminating), bounded execution whenever it exists is intractable for the general process network model.


Updated 04/04/04.