IPC Protocols

'''TODO''' more details
 
== Deadlocks ==
 
Deadlocks can be caused by bad sequences of synchronous messages (and possibly by RPC as well, though this isn't completely understood yet). Luckily, it is easy and cheap to detect whether sending a synchronous IPC message will immediately cause a deadlock. The algorithm is:
 
 Actor:  # something that can receive a synchronous message
     Multiset blocked = { }  # actors (transitively) blocked on this actor
 
     synchronousSend(target, msg):
         blocked = blocked.union({ self })
         if target in blocked:
             deadlock!  # target is already waiting on us, directly or transitively
         transportLayer.send(blocked.serialize(), msg.serialize())
         blocked = blocked.difference({ self })
 
     synchronousReceive(msg):
         blocked = blocked.union(msg.blocked)
         # [1] handle message
         transportLayer.reply()
         blocked = blocked.difference(msg.blocked)
 
Note that the <code>blocked</code> variable needs to be a multiset, and <code>Multiset.difference()</code> needs to remove only one instance of each element in the differenced set, for <code>synchronousReceive()</code> to work correctly. This should be efficient: the <code>blocked</code> multiset shouldn't grow very big, and we should be able to identify an IPC target with, say, a 4-byte ID, so serializing <code>blocked</code> is cheap.
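
To make the bookkeeping concrete, here is a minimal runnable Python sketch of the same algorithm, using <code>collections.Counter</code> as the multiset and a direct method call in place of the transport layer. The <code>Actor</code> class and all the names here are invented for illustration; this is not the real implementation:

 from collections import Counter
 
 class DeadlockError(Exception):
     pass
 
 class Actor:
     def __init__(self, name):
         self.name = name
         self.blocked = Counter()  # multiset of actors blocked on us
 
     def synchronous_send(self, target, msg):
         self.blocked[self] += 1
         if self.blocked[target] > 0:  # target is already waiting on us
             raise DeadlockError("%s -> %s would deadlock" % (self.name, target.name))
         # a real transport would serialize self.blocked and msg here
         target.synchronous_receive(self.blocked.copy(), msg)
         self.blocked[self] -= 1
 
     def synchronous_receive(self, sender_blocked, msg):
         self.blocked += sender_blocked
         self.handle(msg)  # nested synchronous_send() calls happen here
         # (the reply would be sent here)
         self.blocked -= sender_blocked  # removes one instance per element
 
     def handle(self, msg):
         pass  # overridden per actor
 
 a, b = Actor("A"), Actor("B")
 b.handle = lambda msg: b.synchronous_send(a, "reply")  # B calls back into A
 try:
     a.synchronous_send(b, "request")  # A -> B -> A: a cycle
 except DeadlockError as e:
     print(e)  # B -> A would deadlock

<code>Counter</code> subtraction drops only the counts being subtracted, which is exactly the "remove one instance of each element" semantics the note above requires.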
 
Doing a dynamic "may-deadlock" analysis, like what we do for locks and monitors, is more expensive. First, we need to keep state that is global across all processes. Second, when calling <code>synchronousSend()</code>, we need to send another asynchronous IPC message to whichever process manages that global state. But it is doable, and we can choose the granularity at which we detect possible deadlocks:
# Actor x may deadlock with y
# Protocol x may deadlock with y
# ProtocolMessage x may deadlock with y
Each of these is a trade-off between precision and the number of deadlocks detected. Option (1) detects the most deadlocks but is very imprecise; option (3) detects the fewest but is the most precise; option (2) is somewhere in the middle. It may be best to start with option (1), and if there are too many false positives, fall back to (2); if there are still too many, switch to (3).
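
As a rough sketch of what option (1) might look like, the following hypothetical Python monitor (all names invented) keeps the global waits-for graph and is notified asynchronously before each <code>synchronousSend()</code>; a cycle in the graph signals a possible deadlock:

 from collections import defaultdict
 
 class MayDeadlockMonitor:
     # runs in whichever process manages the global state
     def __init__(self):
         self.waits_for = defaultdict(set)  # actor ID -> IDs it may block on
 
     def on_synchronous_send(self, sender, target):
         self.waits_for[sender].add(target)
         if self.reaches(target, sender):  # can target end up waiting on sender?
             print("possible deadlock between %s and %s" % (sender, target))
 
     def reaches(self, start, goal):
         # depth-first search over the recorded waits-for edges
         seen, stack = set(), [start]
         while stack:
             node = stack.pop()
             if node == goal:
                 return True
             if node not in seen:
                 seen.add(node)
                 stack.extend(self.waits_for[node])
         return False
 
 m = MayDeadlockMonitor()
 m.on_synchronous_send("actor-1", "actor-2")
 m.on_synchronous_send("actor-2", "actor-1")  # reports a possible deadlock

Moving to option (2) or (3) would just mean keying the graph by protocol, or by (protocol, message) pair, instead of by actor ID.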
 
Doing a static "may-deadlock" analysis appears to be hard but tractable (at least for two processes). Researchers have done similar things in the past, so we know approximately what is required. However, it will need some machinery in our static-analysis toolkit that doesn't currently exist.

== Implementation ==
Protocol instances will not be thread-safe in the usual multi-threaded, shared-memory sense; instead, they will be "thread-safe" in the share-nothing sense of HTML5 Worker Threads. We should use these protocols to avoid writing multi-threaded, shared-memory code.
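
For illustration only (this is not Mozilla code), here is a small Python example of the share-nothing style: each thread owns its state outright and communicates purely by message passing over queues, which is the discipline the protocols are meant to enforce:

 import threading, queue
 
 def worker(inbox, outbox):
     count = 0  # private state; no other thread can touch it
     while True:
         msg = inbox.get()
         if msg is None:  # shutdown sentinel
             break
         count += 1
         outbox.put((msg, count))
 
 to_child, from_child = queue.Queue(), queue.Queue()
 child = threading.Thread(target=worker, args=(to_child, from_child))
 child.start()
 to_child.put("ping")
 print(from_child.get())  # ('ping', 1); no locks, no shared memory
 to_child.put(None)
 child.join()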
 
We will do much of our static checking with the [http://spinroot.com/spin/whatispin.html SPIN model checker]. Our PDL compiler will know how to generate SPIN models, and additionally, we can use treehydra to create more complicated models from the Mozilla source itself. This could be used to check invariants across protocols in addition to within protocols.
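
As a very rough sketch of what the generated models might look like (the function and the shape of its output are purely hypothetical, not the actual PDL compiler), a PDL-to-SPIN translation could map each message to a Promela <code>mtype</code> and each side of the protocol to a proctype exchanging them over bounded channels:

 def protocol_to_promela(name, messages):
     # hypothetical sketch of a PDL -> SPIN translation
     mtypes = ", ".join(m.upper() for m in messages)
     sends = ";\n".join("  p2c ! " + m.upper() for m in messages)
     recvs = ";\n".join("  p2c ? " + m.upper() for m in messages)
     return ("/* generated model for protocol %s */\n" % name
             + "mtype = { %s };\n" % mtypes
             + "chan p2c = [1] of { mtype };\n\n"
             + "active proctype Parent() {\n%s\n}\n\n" % sends
             + "active proctype Child() {\n%s\n}\n" % recvs)
 
 print(protocol_to_promela("Ping", ["ping", "done"]))

SPIN could then check the emitted model for, e.g., invalid end states or violated assertions.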