Monday, 1 September 2014

Floods

In 2007 large parts of Hull and the surrounding area were flooded. The reasons are still being debated: drainage capacity, pumping capacity, pumping failures and poor coordination between the various agencies have all been blamed in reports.

Most flooding in the UK has historically been due either to rivers bursting their banks or to coastal flooding. Coastal flooding along the east coast is somewhat predictable, as it usually takes the combination of a spring tide, a deep depression and winds from the north or north-west blowing down the North Sea. The low pressure allows the sea level to rise; the wind pushes the water into the bottle-neck of the southern North Sea and, combined with a spring tide, blows waves over sea defences, quickly flooding the land behind them.

Rivers bursting their banks is a bit more obvious. Heavy rainfall runs off land and fills water courses, which may burst their banks downstream and flood surrounding land. What we are beginning to see in Britain are more cases of another kind of flooding: direct flooding from heavy rainfall. Here saturated or impermeable ground cannot hold any more rainfall, so water sweeps across the surface to a low point where it forms a temporary flood, as it did in Hull in 2007. Britain's existing drainage infrastructure tries to deal with this using ditches and other water courses, but more and more often they are unable to cope.

I think more concentrated, heavier rainfall is part of the cause, some of it no doubt down to climate change. Some of the flooding is man-made in other ways too: ditches and other water courses are badly maintained or even being filled in, and building creates more run-off from roofs, roads and other impermeable surfaces.

One decision, made I think in the 1950s, is also having a devastating effect, and that is where land run-off water goes. In most cases the water boards of the 1950s decided to direct run-off into sewers. At the time most areas with access to the coast simply dumped untreated sewage into the sea, and land run-off was seen as a way to dilute it. Now all sewage is supposed to be treated before discharge. I say 'supposed to be treated' because the Water Act allows water companies some discretion to pump untreated sewage into the sea when their infrastructure is overwhelmed by an excessive rainfall event. At Bridlington a new pipe has been built for this purpose.

Adding relatively clean land run-off to sewage has two big effects. First, land run-off ends up being treated as sewage in expensive treatment works. This water could reasonably be discharged directly into natural water courses or the sea without much problem; instead, sewage works with greatly inflated capacity have to be built and run to deal with it. Second, when unusually heavy rainfall events overwhelm the drainage system, the bottlenecks are the sewage pipes, which overflow, putting not just land run-off into streets, gardens and homes but sewage too. Fixing this will not be easy and will be very, very expensive and disruptive, but as climate change progresses some changes will be needed.

I have been helping Cottingham Flood Action Group to understand the drainage and sewer network of that low-lying village, using their knowledge and surveys to add the ditches, drains, culverts, sewers and manholes to OSM and produce a map that helps them understand the issues. You can see the map here. So far only a small part of the village's sewer network has been added, but various people in the group have enough knowledge to add the whole network, I think. Then the onward link to Hull's network needs adding, which is a much bigger task. All of the network drains to a single treatment works at Salt End, east of Hull.

I hope CFAG find my map useful.

Monday, 2 June 2014

Local radio

Just got a plug for OSM on BBC Radio Humberside. Maybe there's scope for more info there ...

Wednesday, 16 April 2014

Images to map overlays

I have been working on a project that needs maps to make sense of it; more on that in a later post. It is a history project for my village, so I wanted to overlay maps from the 19th and early 20th centuries on the modern map. The modern map is easy: I know a good contemporary map I can use. For the historical layers I need maps laid out as tiles so I can use Leaflet to display them.

I was given a scanned map of the village dated 1824 and found another set of maps dated 1910. All of these are out of copyright, so I can comfortably use them. Scans of the 1910 maps and a lot of fiddling and joining gave me a .jpg file for the village. Now the two scans needed aligning to the same projection as the OSM map.

I chose to use Mapwarper to rectify the scans to match OSM. The process is straightforward: I uploaded the .jpg file and the site overlaid it on the OSM map. You add control points on the uploaded image and matching ones on the OSM map; the more control points you add, the better the final alignment. I mostly used road junctions as control points, though the 1824, pre-enclosure map has far fewer roads and I had to make the most of what I could find. The image is then rectified and a GeoTIFF is available to download. A GeoTIFF is a bitmap with georeferencing information added. Once downloaded it can be turned into tiles.

GDAL has a set of utilities for working with geo-data. One of these is gdal2tiles.py, a Python program that turns a GeoTIFF into a set of tiles. It creates TMS tiles; TMS stands for Tile Map Service, which I think was intended to be a standard. The numbering of the Y-axis tiles is inverted compared to OSM tiles. It is easy to rename the tiles to match the OSM convention, but Leaflet (and OpenLayers) supports both TMS and non-TMS layers and can use them interchangeably.
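Leaflet can use the gdal2tiles output directly. A minimal sketch, with an illustrative tile directory:

// point Leaflet straight at the gdal2tiles output; tms: true tells it
// the y-axis is numbered from the bottom (TMS) rather than the top (OSM)
var lyr1910 = L.tileLayer('tiles1910/{z}/{x}/{y}.png', {
    tms: true,
    minZoom: 13,
    maxZoom: 19
});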

Running gdal2tiles (e.g. gdal2tiles.py -z13-19 xxxx.tif tiledir) gives a set of tiles from the GeoTIFF (xxxx.tif) for the zoom levels specified (13-19) and stores them in the directory specified (tiledir). These are now ready for use with Leaflet.

I want to overlay the older maps on the modern map. All of these layers are opaque, so if the three layers are simply stacked only the last one will display, hiding the other two. Leaflet lets you specify the opacity of a layer, so by altering that, the details of each layer can be visible simultaneously. These can then be used as the base for an extra layer of detail, but more on that another time.
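As a sketch of that, reusing the lyr1910 layer from above and assuming the page has a range input with the id slider1910 (both hypothetical):

lyr1910.setOpacity(0.5);  // start semi-transparent so the base map shows through
lyr1910.addTo(map);

// <input type="range" id="slider1910" min="0" max="100" value="50"> in the page
document.getElementById('slider1910').addEventListener('input', function () {
    lyr1910.setOpacity(this.value / 100);
});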

I have created a simple page to show these layers, with sliders to control the opacity. I spent a bit of time aligning the 1910 map and I'm fairly happy with the result. The 1824 map was a bit crude, so I used fewer control points and the result is not as good, but it is still interesting. I'm looking for any more maps of this era for my village.

Tuesday, 25 February 2014

Signed at last

I went out to buy some seeds for the allotment today. On the way home I deliberately drove down Hawthorn Avenue in Hull to see the point where Woodcock Street joins it.

I have written to Hull City Council's traffic department a few times in the past about that junction. Woodcock Street runs from Hawthorn Avenue to St Georges Road. Hawthorn Avenue has a 30 mph limit and St Georges Road a 20 mph limit. There was no speed limit sign at either end of Woodcock Street, nor at any point along it, so from St Georges Road you would assume it to be a 20 mph road, but from Hawthorn Avenue you would assume it to be a 30 mph road, and if you then drove onto St Georges Road you would expect it to remain 30 mph until you saw a 20 mph repeater.

Woodcock Street has been part of the substantial redevelopment in the Hawthorn Avenue area, much of which is still under way, though Woodcock Street itself looks to be pretty much complete. The council have put a cherry on top by erecting a 20 mph sign at the Hawthorn Avenue end to remove any doubt.

Well done Hull City Council, eventually.

Saturday, 1 February 2014

Visualising changes

When someone edits OSM their changes get rendered quickly so they can see their handiwork. That's really good feedback and is valuable in attracting new mappers. When it comes to checking what has changed in an area, just looking at the rendered map is rarely enough to spot any changes. Looking at changesets is the next option.

Changesets were introduced with API version 0.6. They group together the edits a mapper makes over a short period of time. A changeset can stay open for up to 24 hours, but most are closed within an hour of being created, and often a changeset is opened, the edits uploaded to the API and the changeset closed within seconds. Changesets have a bounding box that covers the area the edits were made in, and you can request from the API a list of changesets that cover a specific bounding box. Edits by a mapper are normally wrapped in a changeset covering a small area, but some range across very large areas, even the whole world. These are known as big edits and are usually some sort of mass edit or bot edit. They will show up in a request for changesets even though no actual changes were made in the requested area.
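As a concrete sketch (in the jQuery style used later in these posts; the bounding box values are illustrative), asking the API for the changesets that cover an area looks like this:

$.ajax({
    url: 'https://api.openstreetmap.org/api/0.6/changesets',
    data: { bbox: '-0.46,53.76,-0.38,53.80' },  // left,bottom,right,top
    dataType: 'xml',  // the API returns XML
    success: function (xml) {
        // each <changeset> element carries id, user, timestamps and its bbox
        $(xml).find('changeset').each(function () {
            console.log($(this).attr('id'), $(this).attr('user'));
        });
    }
});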

Looking at changesets can be useful, but apart from trying to work out which of the so-called big edits to ignore, there's the bigger problem of knowing what has actually changed. An added node or way is easy to see, but modified nodes or ways are harder to understand, and for some relations working out the changes can be a long job. Has a node been moved? Have a way's nodes been moved? Has a way had nodes removed or added? Has a node or way had its tags changed? Is there some combination of all of this? Deleted nodes and ways are also a problem: they no longer render, so seeing what has gone can be hard to visualise, especially as some people delete a way and add a new one to replace it, losing the way's history. What would be nice is a before-and-after view of a changeset.

I looked at the problem, initially for nodes and ways, and quickly realised that the way the API delivers information about ways makes things harder than you'd want. When you request the details of a way, the API returns the way with attributes like ID, version and changeset, a list of tags and a list of node IDs. This is fine for the current state of the way: the node IDs refer to the most recent version of each node, and you need the nodes to find the geometry of the way, since longitude and latitude are only stored on nodes. But as soon as you look at an older version of a way, the list of nodes is suddenly not clear at all. Which version of each node does it refer to?

I think the timestamp on each API object can be used to work out which version of a node matches each version of a way. That assumes I have, or can download, every version of every node in the area I'm interested in. Another idea is to store the data for the area from a snapshot, such as Geofabrik's, and apply every changeset for the area from there on. That way I could store the geometry of a way with the way rather than in its nodes, which sounds much more practical. I could then show the before and after view of every changeset, but only from the date of my snapshot and only if I apply every changeset. I'll think about this some more.
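A minimal sketch of the timestamp idea, assuming the full history of a node has already been downloaded as an array of versions (OSM timestamps are ISO 8601 strings, so they compare correctly as strings):

// pick the node version that was current when a given way version was saved:
// the newest node version whose timestamp is not later than the way's
function nodeVersionAt(nodeVersions, wayTimestamp) {
    var match = null;
    nodeVersions.forEach(function (v) {
        if (v.timestamp <= wayTimestamp &&
                (match === null || v.timestamp > match.timestamp)) {
            match = v;
        }
    });
    return match;  // null if the node did not exist at that time
}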

Monday, 9 December 2013

Storm surge

Last week the geography, planetary alignment and weather combined to cause misery to hundreds of families around Britain.

Tides around Britain naturally have a big range; the second highest tidal range in the world is in the Bristol Channel, in Britain's south west. Tides vary depending on the alignment of the Sun and Moon: when they are on the same side of the Earth (new moon) or opposite sides (full moon) the tides are higher than when they are out of alignment. The highest tides are called spring tides, whatever time of year they occur.

The North Sea is roughly V-shaped, getting much narrower at the southern end. Last week a storm swept across the country, driving very strong winds from the north-east down the North Sea coast, forcing water towards the narrow part. The storm was, as usual, a deep low-pressure system, and with the low pressure over the sea, the water level rises in a so-called storm surge.

The spring tide, the storm surge and the extra water blown down the North Sea created a lot of water pushing up against sea defences along the North Sea coast, particularly the southern end. In addition the waves thrown up by the stormy winds made topping the sea walls inevitable. Flooding followed.

Earlier this year I surveyed a new, small road close to the Humber Bridge, Wintersgill Place. Sadly the road was flooded. The houses now look finished, but there are three for-sale signs and one sold sign for the six houses. I wonder just how planning permission was granted for these houses when the area is clearly a flood risk. Now the small field next to these new houses is also set to be developed.

I tried to look at the local council's forward planning map to see if they agree that the area is a flood risk. The map was curiously off-line over the period of the storm and just after. Now it has lost the most detailed zoom level and the newly built houses are not on the map at all. I hope the planners have better information available, but since they have allowed houses to be built that have flooded before they were even sold, maybe they don't.

Wednesday, 27 November 2013

Using Leaflet with a database

The previous two posts created a map with markers, with the marker information stored in a fixed geojson file. For a few markers that don't change much this is fine, but it would be much more flexible if the markers were in a database. If there are a large number of markers, say thousands, browsers might slow down displaying them, even though many might not actually be visible. One way to help with this is to work out which markers would be visible in the current view and show only those. To do this we need some features of Leaflet and we need to introduce Ajax. We will also need to store the marker information in a database, and write some code to extract it and format it into the geojson format that we know works so well.

Ajax is a means of exchanging data between the client browser and the server without forcing a page reload. I tend to use jQuery to simplify the process of using Ajax; jQuery also ensures that it works on a wide range of browsers. We will request data from the server with Ajax, which can return it in json format, and that works for geojson too.

In the examples so far the files from the server have been simple files, not needing scripting or a database. In my examples I'm using PHP for scripting and MySQL for the database, as this is a very common combination available from many hosts. In the GitHub repository there is an SQL file, plaques.sql, which you can use to create a table called plaques in a MySQL database and import the same data that we have seen already.

To extract the data from the database we'll use a PHP script. It receives a request containing a bounding box, extracts the matching rows, formats the geojson result and returns it to the client, which can then display the markers. If the user scrolls the map or changes the zoom, a new Ajax request gets the markers in the new view and displays them. This isn't really needed for the seventy or so markers in this example, but it is very useful for a large number of markers.

Let's start with the PHP script to extract the data:


<?php
// turn error reporting on while developing
ini_set('display_errors', 1);
error_reporting(E_ALL);

/*
 * ajxplaques.php
 * returns plaque points as geojson
 */

// get the server credentials from a shared import file
$idb= $_SERVER['DOCUMENT_ROOT']."/include/db.php";
include $idb;

if (isset($_GET['bbox'])) {
    $bbox=$_GET['bbox'];
} else {
    // invalid request
    $ajxres=array();
    $ajxres['resp']=4;
    $ajxres['dberror']=0;
    $ajxres['msg']='missing bounding box';
    sendajax($ajxres);
}
// split the bbox into its parts
list($left,$bottom,$right,$top)=explode(",",$bbox);

// open the database
try {
    $db = new PDO('mysql:host=localhost;dbname='.$dbname.';charset=utf8', $dbuser, $dbpass);
} catch(PDOException $e) {
    // send the PDOException message
    $ajxres=array();
    $ajxres['resp']=40;
    $ajxres['dberror']=$e->getCode();
    $ajxres['msg']=$e->getMessage();
    sendajax($ajxres);
}

try {
    $sql="SELECT plaqueid,lat,lon,plaquedesc,colour,imageid FROM plaques WHERE lon>=:left AND lon<=:right AND lat>=:bottom AND lat<=:top";
    $stmt = $db->prepare($sql);
    $stmt->bindParam(':left', $left, PDO::PARAM_STR);
    $stmt->bindParam(':right', $right, PDO::PARAM_STR);
    $stmt->bindParam(':bottom', $bottom, PDO::PARAM_STR);
    $stmt->bindParam(':top', $top, PDO::PARAM_STR);
    $stmt->execute();
} catch(PDOException $e) {
    print "db error ".$e->getCode()." ".$e->getMessage();
}
   
$ajxres=array(); // place to store the geojson result
$features=array(); // array to build up the feature collection
$ajxres['type']='FeatureCollection';

// go through the list adding each one to the array to be returned   
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    $lat=$row['lat'];
    $lon=$row['lon'];
    $prop=array();
    $prop['plaqueid']=$row['plaqueid'];
    $prop['plaquedesc']=$row['plaquedesc'];
    $prop['colour']=$row['colour'];
    $prop['imageid']=$row['imageid'];
    $f=array();
    $geom=array();
    $coords=array();
   
    $geom['type']='Point';
    $coords[0]=floatval($lon);
    $coords[1]=floatval($lat);
   
    $geom['coordinates']=$coords;
    $f['type']='Feature';
    $f['geometry']=$geom;
    $f['properties']=$prop;

    $features[]=$f;
   
}
   
// add the features array to the end of the ajxres array
$ajxres['features']=$features;
// tidy up the DB
$db = null;
sendajax($ajxres); // no return from there

function sendajax($ajx) {
    // encode the ajx array as json and return it.
    $encoded = json_encode($ajx);
    exit($encoded);
}
?>



This is called ajxplaques.php, in the ajax folder on the server, and is available in the GitHub repository. The script needs a query string with bbox= in it, defining the west, south, east and north longitude and latitude that bound the current view of the map. It queries the database for the markers inside that box and returns them as geojson. If the bounding box (BBOX) is big enough then all the markers will be returned, and if the BBOX contains no markers then none are returned, which is fine too. I'm using MySQL and ignoring GIS functions, as selecting points is quick and easy. If I were extracting polygons and using a powerful GIS database such as PostgreSQL with the PostGIS extension, then I would consider using a GIS function to find the polygons that intersect the BBOX.
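For illustration, a request such as ajax/ajxplaques.php?bbox=-0.46,53.73,-0.30,53.78 (coordinates and values invented here) would return a FeatureCollection shaped like this:

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": { "type": "Point", "coordinates": [-0.3967, 53.7456] },
      "properties": {
        "plaqueid": "1",
        "plaquedesc": "An example plaque",
        "colour": "blue",
        "imageid": "1"
      }
    }
  ]
}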

To call the script from the JavaScript (example3.js) I use the ajax functions that are part of jQuery:

function askForPlaques() {
    // bounding box of the current view as 'left,bottom,right,top'
    var data='bbox=' + map.getBounds().toBBoxString();
    $.ajax({
        url: 'ajax/ajxplaques.php',
        dataType: 'json',   // the script returns geojson, which is json
        data: data,
        success: showPlaques
    });
}


This builds the query string using map.getBounds() and formats the bounds the way we need with toBBoxString(). The $.ajax() function sends the query string, requests json (of which geojson is just a special case) and calls the function showPlaques() when the data is returned.

function showPlaques(ajxresponse) {
    lyrPlq.clearLayers();         // remove the markers from the previous view
    lyrPlq.addData(ajxresponse);  // add the geojson for the new view
}


The showPlaques() function is called when the data comes back from the script, with the geojson in ajxresponse. We delete all of the existing markers with clearLayers() and add the new data to the geojson layer. To trigger this process we need to call askForPlaques() every time the view of the map changes, and we can ask the map object to fire an event whenever that happens. So after the map is displayed we add

map.on('moveend', whenMapMoves);

This calls the function whenMapMoves() when the event is triggered. That function simply calls askForPlaques() to get the correct data for the new view.

Two more things have changed. Firstly, when the geojson layer is created no data is added - it is created with null - so plaques.js is not used at all. Secondly, when the map is first displayed we call askForPlaques() once to get the first set of markers, before the map has been moved.
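Putting those two changes together, the start-up code looks something like this sketch (the layer options and the whenMapMoves() and askForPlaques() functions are as described above):

// create an empty geojson layer; data arrives later via Ajax
var lyrPlq = L.geoJson(null).addTo(map);

map.on('moveend', whenMapMoves);  // refresh markers whenever the view changes
askForPlaques();                  // initial load for the starting view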

Now we have a much more dynamic map, using data from a database and potentially displaying only a fraction of thousands of markers at a time without overloading the browser.