Rainchasers Developer API v1

Rainchasers Dataset by rainchasers.com is licensed under a Creative Commons Attribution 3.0 Unported License.

The Rainchasers API gives you access to the full Rainchasers dataset to use as you wish. If you're looking for a quick & simple way to embed Rainchasers content onto your website, you'll want to consider using Rainchasers Widgets.

Discovering the API

The Rainchasers API is available over HTTP on the domain api.rainchasers.com. All data is sent and received as JSON. A normal API response is:

HTTP/1.1 200 OK

{
 "status": 200,
 "meta": { ... },
 "data": [ ... ]
}

status is identical to the HTTP status code, 200 meaning OK. All information in meta is also available in the HTTP response headers (pagination links, etc.). The data element contains the requested data.
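
For example, a minimal request from node might look like this (assuming Node 18+ for the global fetch, run as an ES module so top-level await is available):

// Fetch the river list and unpack the standard response envelope.
const response = await fetch("http://api.rainchasers.com/v1/river");
const body = await response.json();

console.log(body.status); // mirrors the HTTP status code, e.g. 200
console.log(body.meta);   // pagination links etc., also available as HTTP headers
console.log(body.data);   // the requested data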

Client Errors

Requesting a non-existent endpoint, or using invalid data or HTTP methods, will result in an API error response. The problem is indicated by the HTTP response code, with further details in the response body.

e.g. api.rainchasers.com/v1/river?ts=abcd

HTTP/1.1 400 Bad Request

{
 "status": 400,
 "meta": { "mediatype":"vnd.rainchasers-error" },
 "data": {
  "error":"INVALID:TS",
  "message":"ts parameter must be a unix timestamp; provided value 'abcd' is not"
 }
}
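
A small node sketch of handling such an error (same Node 18+ / ES module assumption as above):

// Inspect the error code and message returned in the response body.
const res = await fetch("http://api.rainchasers.com/v1/river?ts=abcd");
const body = await res.json();

if (body.status >= 400) {
  console.error(body.data.error);   // machine-readable code, e.g. "INVALID:TS"
  console.error(body.data.message); // human-readable explanation
}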

Rate Limits

API requests are rate limited by IP to an average of 1 request/s, with higher-rate bursts permitted for data resyncs. If the request rate is exceeded, the API will increase the response time to bring requests back within the limit. If you encounter long-running requests, it is likely you are being rate-limited.

User Agent Required

Please add some kind of identification to the User-Agent header sent by your client, so we know who to contact if there are problems.
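
For example, from node (the application name and contact address below are purely illustrative):

// Identify your client so we know who to contact if there are problems.
const res = await fetch("http://api.rainchasers.com/v1/river", {
  headers: { "User-Agent": "my-paddling-app/1.0 (you@example.com)" },
});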

Cross Origin Resource Sharing

The API supports CORS for AJAX requests.

$ curl -I http://api.rainchasers.com
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *

Hypermedia Links

The API is discoverable: endpoints provide links to related resources via the Link: header, and these are also parsed into an easily consumable format in the JSON meta.link object. Some common link keys are listed below, followed by a short sketch of reading them:

next - partial data is being provided, and a subsequent request should be made to this URL
resume - the data is subject to change; this URL can be used later to retrieve any changes
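
A short node sketch of picking these up from the JSON meta (the meta.link shape is the one shown in the paging example further down):

// Read hypermedia links from the JSON meta rather than parsing the Link: header by hand.
const body = await (await fetch("http://api.rainchasers.com/v1/river")).json();
const link = body.meta.link || {};

if (link.next) {
  // another page of data: request link.next and keep going
}
if (link.resume) {
  // dataset complete: store link.resume and use it later to pick up changes
}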

Reading River & Level Data

The core Rainchasers dataset is paddleable river sections, each with a putin and takeout. The API is designed to allow your application to create a local cache of these rivers, and regularly poll for updates (e.g. once every 15 mins).

River Data Fields

uuid - unique identifier
url - Rainchasers site URL
river - river name; unique per physical river. The convention for disambiguating rivers with the same name is to append a nearby placename in brackets, e.g. 'Dulais (Llanwrda)', 'Dulais (Halfway)', 'Dulais (Neath)'.
section - name for this section of the river; there are usually multiple paddleable sections per river
km - length of this section in km
grade - the paddling grade of the river (0-6). grade.text is a textual representation; grade.value is the overall section grade as a float (e.g. 4.5); grade.max is the highest grade on the river (used when a river has only one or two drops of a significantly higher grade than the rest).
desc - short (single paragraph) description of the river
directions - brief description of how to reach the putin and takeout
position - an array of relevant points on the run. position[i].type indicates the point type (putin or takeout at this time), and position[i].lat and position[i].lng give the WGS84 latitude and longitude.
state - the last known state of this run; if null the river is uncalibrated and its state is unknown. state.time is the unix timestamp at which the relevant gauge was last measured; state.value is the normalised gauge value (0 is empty, 1 is huge); and state.source gives details of the actual gauge reading (the raw reading state.source.value, the gauge name state.source.name, and the source type state.source.type). A sketch of reading these fields follows this list.
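
As a small illustration, a node helper that summarises a river record using these fields might look like the sketch below (the output format is arbitrary and entirely up to your application):

// Produce a one-line summary of a river record returned by the API.
function describeRiver(river) {
  const grade = river.grade ? river.grade.text : "ungraded";
  if (river.state === null) {
    // uncalibrated river: no known state
    return `${river.river} (${river.section}), grade ${grade}: level unknown`;
  }
  const level = river.state.value;                // normalised: 0 is empty, 1 is huge
  const when = new Date(river.state.time * 1000); // state.time is a unix timestamp
  return `${river.river} (${river.section}), grade ${grade}: level ${level} at ${when.toISOString()}`;
}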

Creating a Local Cache

To download the entire river list from scratch, a sequence of requests is necessary, 'paging' over the dataset and downloading it in chunks. The API endpoint to download river data is http://api.rainchasers.com/v1/river

HTTP/1.1 200 OK
Link:<http://api.rainchasers.com/v1/river?ts=1357715085>; rel="next"

{
 "status": 200,
 "meta": {
   "link": {
     "next":"http:\/\/api.rainchasers.com\/v1\/river?ts=1357715085"
    }, ...
 },
 "data": [ ... ]
}

Note how a link to the next page is given in the Link: header, and in the JSON response meta.link.next element. If this element/header is present, it means there is another page of data which your application needs to continue to request. If it is not present, it means that was the last 'page' of data, and your application has the latest dataset.

An example of creating a local river cache in node is sketched below.
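
This is a minimal sketch only, assuming Node 18+ for the global fetch; error handling and persistence are left to your application:

// Build a local cache by paging through the full river list.
async function buildCache() {
  const cache = new Map();          // keyed by river uuid
  let resume = null;                // resume URL from the last page, used for later updates
  let url = "http://api.rainchasers.com/v1/river";

  while (url) {
    const body = await (await fetch(url)).json();
    for (const river of body.data) {
      cache.set(river.uuid, river);
    }
    const link = body.meta.link || {};
    resume = link.resume || resume; // present on the last page only
    url = link.next;                // absent on the last page, which ends the loop
  }
  return { cache, resume };
}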

Keeping your Cache Updated

You determine the last 'page' of data from a full cache refresh by the absence of the next link, either in the Link: header or in the meta.link.next element. There is no 'next' data because your dataset is complete. However, your dataset will not remain complete: edits are made to existing rivers, rivers are added or deleted, and (most often) the last known state of a river changes. To allow you to keep your dataset updated, the last page of data carries a resume link instead of the next element. This can be parsed from the Link: header, or found in the meta.link.resume element. When you next want to update your cache, use this resume link to retrieve rivers that have changed or been added since you last updated your cache. Hitting this URL is similar to resyncing your cache, but it only returns changed data (you may have to page through next links, and the final page will contain a new resume URL).
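
Continuing the node sketch above, an update pass can reuse the same loop but start from the stored resume URL:

// Apply any changes since the last sync, starting from the saved resume URL.
async function updateCache(cache, resumeUrl) {
  let url = resumeUrl;
  let resume = resumeUrl;

  while (url) {
    const body = await (await fetch(url)).json();
    for (const river of body.data) {
      cache.set(river.uuid, river); // changed or newly added rivers overwrite old entries
    }
    const link = body.meta.link || {};
    resume = link.resume || resume; // a fresh resume URL for the next poll
    url = link.next;
  }
  return resume;
}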

Handling DELETEs

Rivers are deleted, but rarely. Using the mechanism described above to create a local cache and then polling for updates via resume URLs (e.g. every 15 mins) will give you a full dataset that is kept updated. However, NO messages are provided to signal a river delete, so your application cache can end up containing 'orphaned' rivers that have been deleted from Rainchasers.

To resolve this issue, it is recommended to periodically perform a full cache resync (clearing your local cache and rebuilding it from http://api.rainchasers.com/v1/river). For example, this could be performed in the background by your application every week. You will sometimes hold stale deleted data for a short period, but since river deletes are rare this should not be too problematic.
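
One way this might be scheduled in node, reusing the buildCache and updateCache sketches above:

// Poll for updates every 15 minutes and rebuild from scratch weekly,
// which flushes out any rivers that have been deleted on Rainchasers.
let { cache, resume } = await buildCache();

setInterval(async () => { resume = await updateCache(cache, resume); }, 15 * 60 * 1000);
setInterval(async () => { ({ cache, resume } = await buildCache()); }, 7 * 24 * 60 * 60 * 1000);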

Ad-hoc River Data Query

We'd recommend you utilise a local data cache where possible, but some applications will require ad-hoc queries for individual river data. For example, you may want to use the open CORS nature of this API to embed some river data directly into a webpage, letting the client browser call the API directly via javascript.

An ad-hoc query for an individual river can be constructed using the UUID of the river: e.g. the URL to retrieve data for the Colwyn is http://api.rainchasers.com/v1/river/aedc77e4-7bac-4ad0-8dce-3c39e0a8ab00. The data is returned in the format described above for building a local cache, but for just this river.
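
For instance, in browser JavaScript (assuming, per the cache format above, that data is returned as an array):

// Fetch a single river directly from the browser, relying on the open CORS policy.
fetch("http://api.rainchasers.com/v1/river/aedc77e4-7bac-4ad0-8dce-3c39e0a8ab00")
  .then((res) => res.json())
  .then((body) => {
    const river = body.data[0]; // same format as the cache sync responses
    console.log(river.river, river.section, river.state);
  });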

River Search Query

With a local river cache, you could develop your own search engine across your local data. An alternative is to use the existing Rainchasers search engine. Adding a search query in a q parameter on the normal river sync endpoint performs a search instead of a sync. For example, to search for 'Dee': http://api.rainchasers.com/v1/river?q=Dee. The response from a search query will either be an array of river data, or an indication that the client needs to geocode part of the search query, for example when searching for grade 3 rivers near Llangollen: http://api.rainchasers.com/v1/river?q=grade%203%20Llangollen.

A 202 response from the API indicates that your application needs to geocode the supplied data.geocode string (e.g. with the Google geocoding API). Once geocoded, you need to replace that string in the search query with "lat,lng". For example, Llangollen is at approximately latitude 52.96 and longitude -3.20, so the search would be repeated as http://api.rainchasers.com/v1/river?q=grade%203%2052.96,-3.20.

Note that in any search around a point, an extra element proximity is included in each result; this is the distance in km from the river to that point.
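
A node sketch of the two-step search flow; the geocode() helper is hypothetical and stands in for whichever geocoding service you use:

// Run a search, geocoding any placename the API asks us to resolve.
async function searchRivers(query) {
  const search = (q) =>
    fetch("http://api.rainchasers.com/v1/river?q=" + encodeURIComponent(q))
      .then((res) => res.json());

  let body = await search(query);
  if (body.status === 202) {
    // The API could not resolve part of the query: geocode it and retry as "lat,lng".
    const { lat, lng } = await geocode(body.data.geocode); // hypothetical geocoder helper
    body = await search(query.replace(body.data.geocode, `${lat},${lng}`));
  }
  return body.data; // river array; searches around a point also include proximity in km
}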

Manipulating User Data

Users interact with Rainchasers via their Facebook identity, at present to add comments to rivers or to manage email level alerts for particular runs. Since user authentication is based on Facebook identity, you can use Facebook login integration to allow the user to perform these actions from within your application.

Retrieving River Comments

All user comments against a river can be retrieved via the API endpoint http://api.rainchasers.com/v1/river/_uuid_/comment, where _uuid_ is the relevant river UUID. For example, comments against a section of the Tees are at http://api.rainchasers.com/v1/river/bfbb0571-748f-4a7b-af7d-4673873cbfd3/comment. Each comment has the following fields:

uuid - unique identifier for this comment
time - unix timestamp when the comment was made
author - full name of the author
text - text content of the actual comment

This endpoint returns up to the last 20 comments, most recent first.
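
For example, from node (assuming data is the array of comment records described above):

// List the most recent comments (up to 20) for a river section.
const res = await fetch(
  "http://api.rainchasers.com/v1/river/bfbb0571-748f-4a7b-af7d-4673873cbfd3/comment"
);
const body = await res.json();

for (const comment of body.data) {
  console.log(new Date(comment.time * 1000).toISOString(), comment.author, comment.text);
}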

Adding a Comment

To add a comment, the user must be authenticated via Facebook, and your application must have a Facebook OAuth access token for that user as well as their Facebook UID. To post a comment, your application issues a POST request to the API endpoint http://api.rainchasers.com/v1/user/_uid_/comment (where _uid_ is the user's Facebook UID).

POST http://api.rainchasers.com/v1/user/680944607/comment
Content-type:application/x-www-form-urlencoded
Authorisation:BAADC....AZDZD

text=This%20is%20a%20test
&target=bfbb0571-748f-4a7b-af7d-4673873cbfd3
&uuid=90c8b25d-4b52-4a14-bb43-d16682c67041
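
The same request might be made from node along these lines (the token and UUIDs are the placeholder values from the example above, and the header name follows that example):

// Post a comment as an authenticated Facebook user.
// Placeholder values taken from the example request above.
const accessToken = "BAADC....AZDZD";                       // Facebook OAuth access token
const facebookUid = "680944607";                            // the user's Facebook UID
const riverUuid = "bfbb0571-748f-4a7b-af7d-4673873cbfd3";   // target river for the comment
const commentUuid = "90c8b25d-4b52-4a14-bb43-d16682c67041"; // identifier for the new comment

await fetch(`http://api.rainchasers.com/v1/user/${facebookUid}/comment`, {
  method: "POST",
  headers: {
    "Content-Type": "application/x-www-form-urlencoded",
    "Authorisation": accessToken, // header name as shown in the raw example above
  },
  body: new URLSearchParams({ text: "This is a test", target: riverUuid, uuid: commentUuid }),
});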

Level Email Alerts

To be documented shortly; the Rainchasers website uses the API via CORS to add and delete alerts, so if you're keen you should be able to reverse-engineer it fairly easily.