add README.txt for test corpus directory

Mark Hammond 2011-04-18 14:08:28 +10:00
Parent 8878255ff7
Commit 5b2a2dd239
1 changed file with 81 additions and 0 deletions

@@ -0,0 +1,81 @@
This directory contains a corpus used for testing. Each test is in its
own directory. The intent is that they allow full end-to-end testing of
F1 - from the initial request into F1, through the request F1 makes to
the actual services and the response it receives, to the final response
from F1.

In general, these test cases were first generated by the "protocol capture"
code (see linkoauth/protocap.py) - each capture was then edited and moved
into this corpus with an appropriate name.

Each directory uses a naming convention of "protocol-host-req_type-comments",
where:

* protocol is either 'http' or 'smtp'.
* host is the actual hostname the connection is made to.
* req_type is one of 'auth', 'contacts' or 'send'.
* comments are free form.

The test runner parses these names to work out exactly how to run each test,
so you must follow the convention above (a rough sketch of such parsing is
shown below). Hopefully the runner will eventually become smarter and stop
making assumptions about the directory name.
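
As a rough illustration only - the helper name and the example directory name
below are hypothetical, and this assumes the hostname itself contains no
dashes - the parsing might look something like:

    def parse_corpus_dirname(dirname):
        # "protocol-host-req_type-comments"; comments are free form and may
        # themselves contain dashes, so split at most 3 times from the left.
        parts = dirname.split("-", 3)
        protocol, host, req_type = parts[:3]
        comments = parts[3] if len(parts) > 3 else ""
        assert protocol in ("http", "smtp")
        assert req_type in ("auth", "contacts", "send")
        return protocol, host, req_type, comments

    # eg: parse_corpus_dirname("http-www.example.com-send-transient_error")
    #     -> ("http", "www.example.com", "send", "transient_error")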

Test Types
----------

There are two types of tests this corpus is designed to support.

1) 'Unexpected' service error response handling

In this kind of test we are checking how F1 behaves when it makes a valid
request but the service returns an unexpected error code due to a transient
error on that service. Here, the actual incoming F1 request and the content
of the request made to the service aren't that important - the service just
had a transient error unrelated to the input data.

To make these tests more convenient, some tests don't have any input data
defined - the test runner just synthesizes a simple request (see the sketch
below), ignores the content of the request F1 makes to the service and simply
returns the error response. The final F1 response given that error response
is checked, and that's about it.
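
A minimal sketch of that fallback, assuming a hypothetical helper name and a
made-up synthesized body (the real runner may well differ):

    import json
    import os

    def load_f1_request(testdir):
        path = os.path.join(testdir, "f1-request.json")
        if os.path.exists(path):
            with open(path) as f:
                return json.load(f)
        # No input data defined - synthesize a simple request.  Its exact
        # content doesn't matter because the canned service response is an
        # error regardless of what F1 sends.
        return {"message": "hello world"}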

2) F1 functionality tests

In this kind of test, the things we are testing depend directly on the
incoming F1 request - the content of the request dictates the request we
make to the service (e.g., a 'direct' message versus a 'public' message).

These tests generally have the full set of input requests specified. In this
case the test runner uses the specific incoming request and checks that the
request made to the service is as expected. It then replays the appropriate
response to F1 and checks that the outgoing F1 response is as expected.
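
Very roughly, and again assuming a hypothetical helper name, checking the
nth outgoing request might look like:

    import os

    def check_service_request(testdir, n, actual_request):
        # Only compare against request-<n> when that file exists; otherwise
        # the runner skips straight to replaying response-<n>.
        path = os.path.join(testdir, "request-%d" % n)
        if os.path.exists(path):
            with open(path) as f:
                expected = f.read()
            assert actual_request == expected, "request-%d mismatch" % n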

Corpus Contents
---------------

Each directory may have the following files:

* meta.json - currently unused.
* f1-request.json - a file which holds the JSON body of an incoming request
to F1. If this file doesn't exist, a "simple" request is synthesized, which
is useful for the "unexpected service errors" tests described above.
* request-n - where n is an integer. These correspond to the requests we
expect F1 to make to the external service. For example, if the
f1-request.json file specifies a direct message on Twitter, request-0 will
hold the request F1 should make to the Twitter direct-message API. This
is checked by the test runner. If no request-n file exists, the test runner
doesn't check the request at all - it just returns the appropriate response.
* response-n - where n is an integer. This corresponds to the request-n file
above. This is the response from the external service which the test runner
returns to the F1 code. For the example above, this would be the response
F1 gets from Twitter after a successful direct message.
* expected-f1-response.json - a JSON file which can describe a full F1
response, including header values and the response code.
* expected-f1-data.json - only used if 'expected-f1-response.json' does not
exist. Holds only the body portion of the F1 response.

The tests always check that the final F1 response from the playback is as
specified in the 'expected-f1-*' files.
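
Putting this together, a sketch of how a runner might gather one directory's
files - the function name and the returned structure are illustrative only,
not the real runner's API:

    import glob
    import json
    import os

    def load_corpus_case(testdir):
        def maybe_json(name):
            path = os.path.join(testdir, name)
            if not os.path.exists(path):
                return None
            with open(path) as f:
                return json.load(f)

        # Pair each response-n with its (optional) request-n, in order of n.
        exchanges = []
        resp_paths = sorted(glob.glob(os.path.join(testdir, "response-*")),
                            key=lambda p: int(p.rsplit("-", 1)[1]))
        for resp_path in resp_paths:
            n = resp_path.rsplit("-", 1)[1]
            req_path = os.path.join(testdir, "request-" + n)
            expected_req = None
            if os.path.exists(req_path):
                with open(req_path) as f:
                    expected_req = f.read()
            with open(resp_path) as f:
                exchanges.append((expected_req, f.read()))

        return {
            "f1_request": maybe_json("f1-request.json"),    # may be None
            "exchanges": exchanges,
            "expected_response": maybe_json("expected-f1-response.json"),
            "expected_data": maybe_json("expected-f1-data.json"),
        }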