STREST (Streaming and REST) is a protocol meant to be simple to understand and simple to implement on both the server and the client side. The protocol allows you to write controllers that can serve both normal REST request/response pairs and higher-throughput STREST requests without any additional code. But the super awesomeness comes from its asynchronous nature, which allows easy firehose-type streams as well as event callbacks.
The protocol itself is similar to regular HTTP packets, but messages are typically JSON encoded for easy parsing. STREST uses a single long-running duplex connection over which all requests are handled asynchronously. This allows it to be very high throughput (i.e. it is possible to completely saturate a network using a single connection). Because a STREST packet is similar to an HTTP packet, it can be routed on the server side so that the same code can serve either STREST or regular REST.
- Controllers can service REST and STREST requests
- High throughput for request response methods
- Easy firehose streams
- Asynchronous
- Compatible with standard authentication methods (HTTP Auth, OAuth, SSL)
- Fast
The protocol is very simple. A single long lived connection passes STREST packets back and forth. Each STREST packet contains some transaction headers, but otherwise contains the same data as an HTTP request.
A transaction is initiated by the client. A transaction can be long lived, but cannot span connections. Using transaction ids to match requests with responses allows us to fully utilize the bandwidth, as the pipe never needs to sit idle while a slow request finishes (i.e. responses can be sent out of sequence).
A transaction for a typical REST call is short lived; it ends as soon as the server sends a response. Transactions can be long lived, though, and can span multiple method invocations.
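To make the out-of-sequence behavior concrete, here is a minimal client-side sketch in Python. The function names and the commented-out transport call are hypothetical; the point is that responses carry the originating txn id, so they can be dispatched in any order:

```python
import json

# Pending transactions, keyed by txn id. Because every response carries
# the originating txn id, responses can arrive in any order.
pending = {}

def send_request(txn_id, packet, on_response):
    """Register a callback for this txn, then write the packet out."""
    pending[txn_id] = on_response
    # connection.write(json.dumps(packet))  # hypothetical transport call

def handle_response(raw):
    """Dispatch an incoming packet to the callback registered for its txn id."""
    packet = json.loads(raw)
    txn = packet["strest"]["txn"]
    callback = pending[txn["id"]]
    callback(packet)
    if txn["status"] == "completed":
        del pending[txn["id"]]  # transaction is finished; free the slot
```

A slow request for txn "t1" never blocks the response for "t2": whichever response arrives first is routed to its own callback.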
```
{
    "strest" : {
        "v" : 2.0, //the strest protocol version (number)
        "user-agent" : "java-strest 1.0",
        "txn" : {
            "id" : "t13", //a unique id for every txn
            "accept" : "multi" //(single or multi) the receiver is willing to accept multiple results for this request
        },
        "uri" : "/v1/rest/endpoint", //the endpoint
        "method" : "GET", //GET, POST, PUT, or DELETE
        "params" : { "param1" : 12 }, //map of parameters to pass to the controller (optional); these can also be passed with the uri in standard http fashion
        "shard" : { //used only for shard requests, see goshire-shards project
            "partition" : -1, //the partition, -1 as default
            "key" : , //the key to partition from; either partition or key is necessary for routing
            "revision" : 0 //the router table revision, typically not necessary
        }
    }
}
```
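A request packet like the one above can be assembled from plain data structures; here is a minimal sketch in Python. The user-agent string and default values are illustrative, not part of the spec:

```python
import json

def build_request(txn_id, uri, method="GET", params=None, accept="single"):
    """Build a STREST v2 request packet as a plain dict, ready for JSON encoding."""
    packet = {
        "strest": {
            "v": 2.0,
            "user-agent": "example-client 0.1",  # illustrative value
            "txn": {"id": txn_id, "accept": accept},
            "uri": uri,
            "method": method,
        }
    }
    if params is not None:
        packet["strest"]["params"] = params  # optional parameter map
    return packet

# Serialize for the wire; one JSON object per packet.
wire = json.dumps(build_request("t13", "/v1/rest/endpoint", params={"param1": 12}))
```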
strest.txn.id => A unique string id generated by the client. Allows clients to easily handle multiple async requests on the client side. Ids need to be unique per connection. A simple atomic integer counter is appropriate.
strest.txn.accept => Sent by the client (single, multi); defaults to 'single'. If 'multi', the server is allowed to send multiple return packets (i.e. a streaming/firehose connection).
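The "atomic integer counter" suggestion above can be sketched in a few lines of Python; the class name and "t" prefix are illustrative:

```python
import itertools
import threading

class TxnIdGenerator:
    """Per-connection txn id generator. A plain counter behind a lock keeps
    ids unique even when several threads issue requests on one connection."""

    def __init__(self, prefix="t"):
        self._counter = itertools.count(1)
        self._lock = threading.Lock()
        self._prefix = prefix

    def next_id(self):
        with self._lock:
            return "%s%d" % (self._prefix, next(self._counter))
```

Each connection would own one generator, since ids only need to be unique per connection.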
```
{
    "status" : {
        "code" : 200,
        "message" : "Success"
    },
    "strest" : {
        "txn" : {
            "id" : "t13",
            "status" : "continue" //completed or continue
        },
        "v" : 2.0
    },
    "data" : "some data here",
    "moredata" : ["more","even more"]
}
```
strest.txn.id => This is the unique id generated by the client. Server simply returns the same txn id as contained in the request.
strest.txn.status => Returned by the server (completed, continue). 'continue' indicates that more responses should be expected. If strest.txn.accept from the client is 'single', then this should always be 'completed'.
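The continue/completed convention can be sketched from the server's point of view. This Python fragment is an assumption about how a server might emit a multi-result stream for a txn whose accept was 'multi'; the helper names are hypothetical:

```python
def make_response(txn_id, status, code=200, message="Success", **data):
    """Build a STREST v2 response packet with the given txn status."""
    packet = {
        "status": {"code": code, "message": message},
        "strest": {"txn": {"id": txn_id, "status": status}, "v": 2.0},
    }
    packet.update(data)  # e.g. data=..., moredata=...
    return packet

def stream_results(txn_id, results):
    """Yield one 'continue' packet per result, then a final 'completed'
    packet so the client knows the transaction is finished."""
    for item in results:
        yield make_response(txn_id, "continue", data=item)
    yield make_response(txn_id, "completed")
```

For an accept of 'single', the server would instead send exactly one packet with status 'completed'.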
STREST works perfectly with websockets: each STREST JSON packet is sent in a single websocket frame. A client side driver is available.