This commit is contained in:
Ken McMillan 2016-09-15 15:35:06 -07:00
Parent 44141c496f
Commit 6cce55a9b4
4 changed files: 654 additions and 3 deletions

doc/examples/sht/proto.md (new file, 328 lines)

@@ -0,0 +1,328 @@
---
layout: page
title: Protocol
---
Now we get to the [actual protocol](proto.ivy). Each host has a hash
table and a delegation map describing where to find ranges of
keys. When the host receives a request to get or set the value of a
key in the directory, it looks up the key in the delegation map. If
the key is delegated to the local host, the operation is performed
directly on the host's hash table. Otherwise, the request is
forwarded to the host indicated in the delegation map. Since the
delegation map of the local host may be out of date, the forwarded
request may be forwarded repeatedly until it reaches the host with
actual responsibility for the key. When this host receives the
forwarded request, it performs the operation on its local hash table
and returns a reply directly to the original requesting host. This
host then responds to the client with an answer.

Initially, all keys belong to the master server numbered `0`. A host
that serves a range of keys can delegate that range to another host by
extracting a shard and sending it in a *delegate* message. The host
receiving the delegate message incorporates the shard into its own
hash table. Both hosts update their delegation maps
accordingly. Hosts don't make any attempt to advertise the keys that
they own to other hosts. This means that a request for a key must
follow the original chain of delegation to get to the owner of the
key.

# Specification

Here is the client-facing interface of the protocol:

    module sht_protocol(me,ref,trans,id,key,data,shard) = {

        action set(k:key.t,d:data) returns(ok:bool, tx:ref.txid)
        action get(k:key.t) returns(ok:bool, tx:ref.txid)
        action answer(k:key.t,d:data,tx:ref.txid)
        action delegate_(dst:id, lo:key.iter.t, hi:key.iter.t) returns(ok:bool)
        ...

The parameters are `me`, the host's id, `ref`, the [reference
object](reference.html), `trans`, the [transport service](trans.html),
and the data types `id`, `key`, `data` and `shard` (`data` is the type
of values in the directory).
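
For example, the final assembly (described in [the last
section](sht.md)) creates one instance of this module per host id:

    instance proto(X:id) : sht_protocol(X,ref,trans,id,key,data,shard)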

Notice that `set` and `get` return a parameter of type `ref.txid`, or
transaction id. This is a ghost type defined by the reference object
that is used to define linearizability. Transaction id's don't
actually appear in the compiled implementation. When the protocol calls
back to `answer`, it provides a ghost transaction id that allows us to
*specify* the correct data.

Here is the specification of this interface:

    object spec = {

        before set {
            tx := ref.begin(me,ref.write,k,d)
        }

        before get {
            tx := ref.begin(me,ref.read,k,0)
        }

        before answer {
            assert ref.data_(tx) = d;
            call ref.end(tx)
        }

        before delegate_ {
            assert dst ~= me;
            assert lo < hi;
            assert key.iter.between(lo,K,hi) -> impl.dm.map(K,me)
        }
    }

The reference object is used to provide the specification of the
interface actions. In particular, calls to `set` and `get` begin new
transactions. The reference object's action `ref.begin` creates a new
transaction and provides the transaction id. Notice that the
transaction id is actually assigned in the specification
monitor. Normally, specification monitors only make assertions about
return values; the actual return values are provided by the
implementation. In the case of ghost values, however, it is normal for
the specification itself to provide them.

The call-back `answer` ends the transaction and asserts that the data
it provides is correct for that transaction according to the reference
object. Notice that this is a *before* specification because `answer`
is a call-back. We expect the protocol to guarantee this assertion
before calling `answer`. If this is true, then we know that there
exists a linearization of the transactions that is consistent with the
answers we provide.

The specification of `delegate_` is interesting because it has no
post-condition. This is because `delegate_` has no visible effect for
the user of the interface (other than, hopefully, on performance). The
pre-condition says that we cannot delegate to ourselves, that we have
to delegate at least one key, and that we must own all the keys being
delegated. There is a slight violation of abstraction here, since
we refer to the abstract state of the delegation map, which is
internal. It would be better form to make a copy of this map and
describe the way that `delegate_` updates it. That way, users of the
interface wouldn't have to peek inside the implementation to prove the
pre-condition. For now, though, we'll be lazy and not do that, since
this pre-condition is just an environment assumption that we won't
actually prove.
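
As a rough sketch of that better form (hypothetical; the ghost relation
`owner` is introduced here for illustration and is not part of the
actual proof), the specification could track ownership abstractly and
update its own copy when keys are delegated:

    object spec = {
        relation owner(K:key.t, X:id)  # ghost copy of the delegation state

        after init {
            owner(K,X) := (X = 0)  # initially the master owns all keys
        }

        before delegate_ {
            assert dst ~= me;
            assert lo < hi;
            assert key.iter.between(lo,K,hi) -> owner(K,me);
            owner(K,me) := owner(K,me) & ~key.iter.between(lo,K,hi);
            owner(K,dst) := owner(K,dst) | key.iter.between(lo,K,hi)
        }
    }

Callers could then prove the pre-condition from the specification's own
state, without peeking at `impl.dm`.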

# Implementation

The implementation is, of course, the interesting part. We start by defining
a concrete `struct` for requests that we can pass over the network:

    object req = {
        type t = struct {
            src : id,
            rkey : key.t,
            rtype : ref.otype,
            rdata : data,
            rtxid : ref.txid
        }
    }

The `src` field allows the reply to be routed back to the source. The
key, operation type and data describe the requested operation. The
transaction id is ghost and is used only for specification purposes.

The implementation of a host contains a [hash table](table.html) and a
[delegation map](delmap.html):

    object impl = {

        instance hash : hash_table(key,data,shard)
        instance dm : delegation_map(key,id)
        ...

Here is the internal action that handles a request, either locally, or by forwarding it:

    action handle_request(rq:req.t) returns(ok:bool) = {
        local src:id, k:key.t, op:ref.otype, d:data, tx:ref.txid, ow:id {
            src := req.src(rq);
            k := req.rkey(rq);
            op := req.rtype(rq);
            d := req.rdata(rq);
            tx := req.rtxid(rq);
            ow := dm.get(k);
            if ow = me {
                call ref.commit(tx);  # this is ghost!
                if op = ref.read {
                    req.rdata(rq) := hash.get(k)
                }
                else {
                    call hash.set(k,d)
                };
                ok := trans.send_reply(me, src, rq)
            } else {
                ok := trans.send_request(me, ow, rq)  # forward request
            }
        }
    }

We start by extracting the fields of the request. Then we call the
delegation map `dm` to see who (we think) owns the requested key. If the
owner is `me` (the local host) we call `ref.commit` to indicate that
the transaction is being committed now. This is possible because the
transaction id is a ghost field of the request. In the compiled code,
the reference object will be abstracted away, so this call will do
nothing. Then we execute the request. If it's a read, we get data from
the hash table. If a write, we set data. Then we send the modified
request back to the original requester as a reply. On the other hand,
if the owner is not `me`, we forward the request to the owner. In
either case, to send a protocol message, we call the `trans` service.

Now we use `handle_request` to implement `set` and `get`. Here is `set`:

    implement set {
        local rq:req.t {
            req.src(rq) := me;
            req.rkey(rq) := k;
            req.rtype(rq) := ref.write;
            req.rdata(rq) := d;
            req.rtxid(rq) := tx;  # ghost!
            ok := handle_request(rq)
        }
    }

It just builds a request and calls `handle_request`. Notice, though,
that the transaction id generated by the specification monitor (which
runs before the implementation) is inserted in the request.

The implementation of `get` is similar:

    implement get {
        local rq:req.t {
            req.src(rq) := me;
            req.rkey(rq) := k;
            req.rtype(rq) := ref.read;
            req.rtxid(rq) := tx;
            ok := handle_request(rq)
        }
    }

The `delegate_` action is quite simple:

    implement delegate_ {
        call dm.set(lo,hi,dst);
        ok := trans.send_delegate(me,dst,hash.extract_(lo,hi))
    }

We set the range of keys in the delegation map, extract a shard and
call the transport service to send the shard.

Now we have to handle the incoming messages by implementing the
call-back actions of the transport service. Incoming forwarded requests
are handled like this:

    implement trans.recv_request(rq:req.t) {
        local ok : bool {
            ok := handle_request(rq)
        }
    }

That is, we just call `handle_request`. We ignore the `ok` flag
returned by `handle_request`. This means that if we don't have the
resources to handle the request, the request just gets dropped. A good
exercise would be to add a return parameter to `trans.recv_request` so
that when a request can't be served immediately, the packet is not
acknowledged and will then be retried. This won't happen with the
current implementation of `queue`, however, which doesn't put a
limit on the number of enqueued messages (in practice this means that
if the queue exceeds the available memory resources, the server
process will stop).
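
Here is a minimal sketch of that exercise (hypothetical; the action
declaration in the transport interface would have to gain the same
return parameter, and the transport would have to retry requests that
are not acknowledged):

    implement trans.recv_request(rq:req.t) returns(ok:bool) {
        ok := handle_request(rq)  # report back-pressure instead of dropping
    }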

Here is how we handle incoming replies:

    implement trans.recv_reply(rq:req.t) {
        call answer(req.rkey(rq),req.rdata(rq),req.rtxid(rq))
    }

That is, we just call back to `answer` with the information in the
reply (including the ghost transaction id).

Finally, we handle incoming delegation messages like this:

    implement trans.recv_delegate(s:shard.t) {
        call dm.set(shard.lo(s),shard.hi(s),me);
        call hash.incorporate(s)
    }

That is, we add the key range of the shard to our delegation map,
indicating the local host as owner. Then we incorporate the shard into
our hash table.

That's it for the protocol.

# Invariants

A fair number of invariants are needed in the proof of the protocol to
cover the various kinds of protocol messages that can be in transit
and to make sure that the global protocol state is consistent.

The global protocol state invariants are:

    # If I own this key, then my hash table data matches the reference object
    conjecture impl.dm.map(K,me) -> hash.hash(K) = ref.map(K)

    # If I own this key, then no one else does
    conjecture impl.dm.map(K,me) & X ~= me -> ~proto(X).impl.dm.map(K,X)

The second invariant refers to `proto`, which is a collection of all
hosts. This will be defined when we [instantiate the protocol](sht.md).

The invariants for delegation messages in transit are:

    # If I own this key, then no delegated shard does
    conjecture proto.impl.dm(me).map(K,me)
        -> ~(trans.delegated(X,S) & key.iter.between(shard.lo(S),K,shard.hi(S)))

    # No two delegated shards have keys in common
    conjecture trans.delegated(X,S) & key.iter.between(shard.lo(S),K,shard.hi(S))
        & trans.delegated(X1,S1) & key.iter.between(shard.lo(S1),K,shard.hi(S1))
        -> X = X1 & S = S1

    # Delegated shards have correct data relative to reference object
    conjecture trans.delegated(X,S) & key.iter.between(shard.lo(S),K,shard.hi(S))
        -> shard.value(S,K) = ref.map(K)

    # Every shard in transit is valid
    conjecture trans.delegated(D,S) -> shard.valid(S)

Notice we state that the data in a shard in transit has to reflect all
committed transactions. This key invariant holds because data in
transit is owned by no one and thus can't be modified.

For forwarded requests, we have:

    # Forwarded requests have correct operations relative to the reference object
    conjecture trans.requested(D,R) & L = req.rtxid(R) ->
        (req.rkey(R) = ref.key_(L) &
         req.rtype(R) = ref.type_(L) &
         (req.rtype(R) = ref.write -> req.rdata(R) = ref.data_(L)))

    # All forwarded requests have been generated but not committed
    conjecture trans.requested(D,R) -> ref.generated(req.rtxid(R)) & ~ref.committed(req.rtxid(R))

    # No two forwarded requests with the same txid
    conjecture trans.requested(D1,R1) & trans.requested(D2,R2) & req.rtxid(R1) = req.rtxid(R2)
        -> D1 = D2 & R1 = R2

Notice how we use the reference object to specify correctness of data
in flight. This is possible because of the ghost transaction id's in
the messages.

For forwarded replies, we have similar invariants:

    # Forwarded replies have correct operations relative to the reference
    conjecture trans.replied(D,R) & L = req.rtxid(R) ->
        (req.rkey(R) = ref.key_(L) &
         req.rtype(R) = ref.type_(L) &
         req.rdata(R) = ref.data_(L))

    # All forwarded replies have been generated and committed
    conjecture trans.replied(D,R) -> ref.generated(req.rtxid(R)) & ref.committed(req.rtxid(R))

    # No two forwarded replies with the same txid
    conjecture trans.replied(D1,R1) & trans.replied(D2,R2) & req.rtxid(R1) = req.rtxid(R2)
        -> D1 = D2 & R1 = R2

Notice that replies differ from requests in that they represent committed
transactions.

These invariants are inductive and imply that the protocol satisfies
its service specification. We'll prove that in the next section.
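
Concretely, that check is driven by an isolate declaration like the
following (it appears in the final assembly):

    isolate iso_p = proto with ref,trans,key,shard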

doc/examples/sht/sht.ivy (changed)

@@ -45,5 +45,5 @@ object debug = {
 import debug.send
 import debug.recv
-extract iso_impl(me:id) = key,shard,num,trans.impl(me),proto.impl(me),udp.impl(me),debug(me)
+extract iso_impl(me:id) = key,shard,num,trans.impl(me),proto.impl(me),udp.impl(me)

doc/examples/sht/sht.md (new file, 321 lines)

@@ -0,0 +1,321 @@
---
layout: page
title: Final Assembly
---
Now we [put all the components together](sht.ivy) to compile a protocol
implementation we can run. First we have to instantiate the modules we
have created so far:

    include proto
    include reference
    include key
    include trans
    include udp
    include shard
    include seqnum

    type id
    type data

    instance ref : sht_reference(id,key,data)
    instance shard : table_shard(key,data)
    instance num : sequence_numbers
    instance proto(X:id) : sht_protocol(X,ref,trans,id,key,data,shard)
    instance trans : sht_transport(udp,proto.req.t,shard.t,num,id)
    instance udp : udp_simple(id,trans.net_msg.t)

Notice that the protocol module is instantiated with a parameter
`X:id`. This creates one process for each process id. We instantiate
`trans` and `udp` after `proto` because of a technicality: IVy allows
forward references to objects but not to types. Since we need
`proto.req.t` (the request message type) in `trans` and
`trans.net_msg.t` (the low-level message type) in `udp`, we have to
instantiate them in this order (or, of course, predefine these types).

Now we actually prove the protocol. We use three isolates for this:

    isolate iso_p = proto with ref,trans,key,shard
    isolate iso_dm(me:id) = proto.impl.dm(me) with key
    isolate iso_hash(me:id) = proto.impl.hash(me) with key,shard

The first verifies the protocol itself. The other two verify the
delegation map and the hash table inside `proto` using parameter
stripping. We need to do this since IVy doesn't yet allow isolates
within parameterized objects. The idea here is that, since the
delegation maps and hash tables of the different processes are
independent, we can verify just one with a symbolic parameter value
`me`.

We also need to export the protocol's interface actions to the environment:

    export proto.set
    export proto.get
    export proto.delegate_
    import proto.answer

Notice that `answer` is a call-back and therefore an import.

# Proof checking

Before compiling, let's check the whole proof:

    ivy_check sht.ivy
    Checking isolate iso_dm...
    trying ext:proto.impl.dm.get...
    checking consecution...
    trying ext:proto.impl.dm.set...
    checking consecution...
    Checking isolate iso_hash...
    trying ext:proto.impl.hash.extract_...
    checking consecution...
    trying ext:proto.impl.hash.get...
    checking consecution...
    trying ext:proto.impl.hash.incorporate...
    checking consecution...
    trying ext:proto.impl.hash.set...
    checking consecution...
    Checking isolate iso_p...
    trying ext:proto.delegate_...
    checking consecution...
    trying ext:proto.get...
    checking consecution...
    trying ext:proto.set...
    checking consecution...
    trying ext:shard.arr.get...
    checking consecution...
    trying ext:trans.recv_delegate...
    checking consecution...
    trying ext:trans.recv_reply...
    checking consecution...
    trying ext:trans.recv_request...
    checking consecution...
    Checking isolate key.iso...
    trying ext:key.iter.create...
    checking consecution...
    trying ext:key.iter.end...
    checking consecution...
    Checking isolate num.iso_props...
    trying ext:num.iter.create...
    checking consecution...
    trying ext:num.iter.end...
    checking consecution...
    trying ext:num.next...
    checking consecution...
    Checking isolate shard.index.iso...
    trying ext:shard.index.next...
    checking consecution...
    Checking isolate trans.impl.iso_mq...
    trying ext:trans.impl.mq.delete_all...
    checking consecution...
    trying ext:trans.impl.mq.empty...
    checking consecution...
    trying ext:trans.impl.mq.enqueue...
    checking consecution...
    trying ext:trans.impl.mq.pick_one...
    checking consecution...
    Checking isolate trans.iso...
    trying ext:trans.impl.timer.timeout...
    checking consecution...
    trying ext:trans.send_delegate...
    checking consecution...
    trying ext:trans.send_reply...
    checking consecution...
    trying ext:trans.send_request...
    checking consecution...
    trying ext:udp.recv...
    checking consecution...
    OK

Yikes, that's a lot of checking. IVy grinds it out in less than a
minute, however, using the power of
[Z3](https://github.com/Z3Prover/z3).

# Compile and run

To actually run, we'll need to interpret the process id type:

    object impl = {
        interpret id -> bv[1]
    }

We could also interpret the type `data` (the directory values), but
just for testing, the compiler's default interpretation `int` will do.
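
If we did want a concrete representation for `data`, one plausible
choice (a hypothetical sketch, not part of the example) would be a
fixed-width numeric type, interpreted in the same way as `id`:

    object impl = {
        interpret id -> bv[1]
        interpret data -> bv[16]  # hypothetical: directory values as 16-bit numbers
    }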

We create an extract:

    extract iso_impl(me:id) = key,shard,num,trans.impl(me),proto.impl(me),udp.impl(me)

Notice the extract is parameterized. We want to compile a binary
implementing just one host. The implementations that have a host id
parameter are therefore stripped. The `key`, `shard` and `num`
objects aren't stripped because they are stateless.

We compile:

    $ make sht
    ivy_to_cpp target=repl isolate=iso_impl sht.ivy
    g++ -g -o sht sht.cpp

Since our process id type is `bv[1]` (one-bit binary numbers), we have
two process ids.

We fire up terminals A and B, and run the two processes:

    A: $ ./sht 0
    A: >
    B: $ ./sht 1
    B: >

Remember that process `0` is the master, which initially owns all the
keys. Let's do some local operations:

    A: > proto.get(7)
    A: true
    A: > proto.answer(7,0,0)
    A: proto.set(7,42)
    A: true
    A: > proto.answer(7,42,0)
    A: proto.get(7)
    A: true
    A: > proto.answer(7,42,0)

So we can get and set values locally. Let's try remotely from process 1:

    B: > proto.get(7)
    B: true
    B: > proto.answer(7,42,0)
    B: proto.set(7,66)
    B: true
    B: > proto.answer(7,66,0)
    A: proto.get(7)
    A: true
    A: > proto.answer(7,66,0)

So we can get and set values remotely. Let's try delegating a shard:

    A: proto.delegate_(1,{is_end:false,val:2},{is_end:false,val:12})
    A: true
    A: > proto.get(7)
    A: true
    A: > proto.answer(7,66,0)
    B: proto.get(7)
    B: 0
    B: > proto.answer(7,66,0)

This looks good, but it's a bit hard to tell whether the delegation
actually did anything, since the answers don't change.

# Debug monitoring

It would be nice to be able to observe the packets being exchanged to
see if the expected things are happening. There's an easy way to do
this: compile in a debug monitor. Here's an example:

    object debug = {
        action send(dst:id,m:trans.net_msg.t)
        action recv(dst:id,m:trans.net_msg.t)

        before udp.send(src:id,dst:id,msg:trans.net_msg.t) {
            call send(dst,msg)
        }

        before udp.recv(dst:id,msg:trans.net_msg.t) {
            call recv(dst,msg)
        }
    }

    import debug.send
    import debug.recv

This monitor synchronizes with the low-level message send and receive
calls `udp.send` and `udp.recv`. In each case it just calls back to the
environment, passing the destination address and message as parameters.

Now we have to add the object `debug` to our extract so it doesn't get
abstracted away:

    extract iso_impl(me:id) = key,shard,num,trans.impl(me),proto.impl(me),udp.impl(me),debug(me)

Let's run this extract:

    A: ./sht 0
    A: > proto.get(7)
    A: debug.send(0,{mty:reply_t,src:0,rq:{src:0,rkey:7,rtype:0,rdata:0,rtxid:0},num:0,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    A: true
    A: > debug.recv({mty:reply_t,src:0,rq:{src:0,rkey:7,rtype:0,rdata:0,rtxid:0},num:0,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    A: debug.send(0,{mty:ack_t,src:0,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:0,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    A: proto.answer(7,0,0)
    A: debug.recv({mty:ack_t,src:0,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:0,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})

That's a lot just to get the value of key 7 locally. What happened
here? It looks like process 0 handled the request locally, then sent a
reply message to itself for key 7 with value 0. It then received the
reply from itself, sent back an acknowledgment to itself, and
presented the answer. Finally, it received the acknowledgment from
itself.

This seems a bit roundabout for a purely local transaction. Might it
make more sense to call back immediately, without sending and receiving
a message? The answer is probably not. If we called back to `answer`
from within `get`, the caller would have to deal with the possible
interference caused by `answer`. It's probably better from a usability
point of view to stash the answer in a message and present it
asynchronously, as if the operation were done remotely.

Now let's try a remote operation:

    B: > proto.set(7,66)
    B: debug.send(0,{mty:request_t,src:1,rq:{src:1,rkey:7,rtype:1,rdata:66,rtxid:0},num:0,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    B: true
    B: > debug.recv({mty:ack_t,src:0,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:0,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    B: debug.recv({mty:reply_t,src:0,rq:{src:1,rkey:7,rtype:1,rdata:66,rtxid:0},num:0,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    B: debug.send(0,{mty:ack_t,src:1,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:0,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    B: proto.answer(7,66,0)
    A: debug.recv({mty:request_t,src:1,rq:{src:1,rkey:7,rtype:1,rdata:66,rtxid:0},num:0,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    A: debug.send(1,{mty:ack_t,src:0,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:0,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    A: debug.send(1,{mty:reply_t,src:0,rq:{src:1,rkey:7,rtype:1,rdata:66,rtxid:0},num:0,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    A: debug.recv({mty:ack_t,src:1,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:0,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})

From the point of view of process 1, we send a request, get an
acknowledgment, then a reply, then acknowledge the reply, then
answer. From the point of view of process 0, we receive a request,
acknowledge it, send a reply, then receive an acknowledgment for the
reply.

Let's try to delegate:

    A: proto.delegate_(1,{is_end:false,val:2},{is_end:false,val:12})
    A: debug.send(1,{mty:delegate_t,src:0,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:1,sh:{lo:{is_end:0,val:2},hi:{is_end:0,val:12},pairs:[{p_key:7,p_value:66}]}})
    A: true
    A: > debug.recv({mty:ack_t,src:1,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:1,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    B: debug.recv({mty:delegate_t,src:0,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:1,sh:{lo:{is_end:0,val:2},hi:{is_end:0,val:12},pairs:[{p_key:7,p_value:66}]}})
    B: debug.send(0,{mty:ack_t,src:1,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:1,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})

We can see that the shard sent contains the key/value pair (7,66).
Let's get the value remotely from process 0:

    A: > proto.get(7)
    A: debug.send(1,{mty:request_t,src:0,rq:{src:0,rkey:7,rtype:0,rdata:0,rtxid:0},num:2,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    A: true
    A: > debug.recv({mty:ack_t,src:1,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:2,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    A: debug.recv({mty:reply_t,src:1,rq:{src:0,rkey:7,rtype:0,rdata:66,rtxid:0},num:1,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    A: debug.send(1,{mty:ack_t,src:0,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:1,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    A: proto.answer(7,66,0)
    B: debug.recv({mty:request_t,src:0,rq:{src:0,rkey:7,rtype:0,rdata:0,rtxid:0},num:2,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    B: debug.send(0,{mty:ack_t,src:1,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:2,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    B: debug.send(0,{mty:reply_t,src:1,rq:{src:0,rkey:7,rtype:0,rdata:66,rtxid:0},num:1,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})
    B: debug.recv({mty:ack_t,src:0,rq:{src:0,rkey:0,rtype:0,rdata:0,rtxid:0},num:1,sh:{lo:{is_end:0,val:0},hi:{is_end:0,val:0},pairs:[]}})

As expected, the request gets forwarded from process 0 to process
1. Notice also that we can see the sequence number incrementing in
successive packets.

Even after formal verification, there is still value
in testing. For example, we may have performance problems, or there
may be important properties that we didn't specify. In this case, we
were able to observe by testing that delegation is actually occurring,
which is not formally specified. In addition, we haven't proved any
progress properties. For all we know, our protocol could get into a
deadlock and stop responding. In the next section, we discuss how to
test more rigorously than we just did, to gain more confidence in our
protocol implementation.

ivy/ivy_parser.py (changed)

@@ -333,7 +333,7 @@ def p_pname_relop(p):
     p[0] = App(p[1])
     p[0].lineno = get_lineno(p,1)
-def p_pname_relop(p):
+def p_pname_this(p):
     'pname : THIS'
     p[0] = App(This())
     p[0].lineno = get_lineno(p,1)
@@ -948,7 +948,9 @@ def p_top_interpret_symbol_arrow_symbol(p):
 def p_top_interpret_symbol_arrow_lcb_symbol_dots_symbol_rcb(p):
     'top : top INTERPRET oper ARROW LCB SYMBOL DOTS SYMBOL RCB'
     p[0] = p[1]
-    thing = InterpretDecl(mk_lf(Implies(p[3],Range(p[6],p[8]))))
+    imp = Implies(p[3],Range(p[6],p[8]))
+    imp.lineno = get_lineno(p,4)
+    thing = InterpretDecl(mk_lf(imp))
     thing.lineno = get_lineno(p,4)
     p[0].declare(thing)