Planning Alerts: community scrapers ahoy!

This post is by Andrew Speakman, who’s coordinating OpenlyLocal’s planning application work.

We can now report good progress on our plan to develop community scrapers to underpin the new incarnation of PlanningAlerts.com. The plan applies to those councils that use non-standard planning systems and our latest estimate is that there are around 100 of these sites.

There are now 13 successful working scrapers with more on the way, and some of the £75 bounties have already been paid out. The data from these scrapers is being regularly uploaded into OpenlyLocal using the ScraperWiki API, and you can see the results here with Crawley Council:

Crawley Council Planning Applications

The list of the authorities being scraped by this method is as follows:

This is still very much a work in progress, but those in the above list that we’ve linked to are all running well and helping collect up-to-date planning applications with locations, and as soon as we turn on the alerts system (currently being tested) they will start sending out email alerts.

There are also some councils whose data we’re importing but for which we won’t yet be able to send alerts – Wokingham, from the above list, is an example – because they do not include postcodes in the planning application details, and our location coding is based on postcodes to ensure our data is fully open. If anyone from authorities such as Wokingham wants to rectify this situation, we’re more than happy to work with them.

If you want to get involved in helping scrape the UK’s planning data and building an open database of planning applications for the whole of the UK, contact me at planning@openlylocal.com. Further details about data fields and the available planning authorities are defined in this shared Google spreadsheet.

Happy scraping.

Andrew


How to help build the UK’s open planning database: writing scrapers

This post is by Andrew Speakman, who’s coordinating OpenlyLocal’s planning application work.

As Chris wrote in his last post announcing OpenlyLocal’s progress in building an open database of planning applications, while we can do the importing from the main planning systems, if we’re really going to cover the whole country, we’re going to need the community’s help. I’m going to be coordinating this effort and so I thought it would be useful to explain how we’re going to do this (you can contact me at planning@openlylocal.com).

First, we’re going to use the excellent ScraperWiki as the main platform for writing external scrapers. It supports Python, Ruby and PHP, and has worked well for similar schemes. It also means the scraper is openly available and we can see it in action. We will then use the ScraperWiki API to upload the data regularly into OpenlyLocal.
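To make that concrete, here’s a minimal Ruby sketch of pulling rows out of a scraper’s datastore over HTTP. The endpoint, the jsondict format and the default swdata table name are assumptions based on the classic ScraperWiki API, and the scraper name is a placeholder – this is an illustration, not OpenlyLocal’s actual import code.

    require 'open-uri'
    require 'json'

    # Hypothetical scraper shortname and query – replace with a real one.
    scraper = 'example_planning_scraper'
    query   = 'select uid, url, address from swdata limit 20'

    api_url = 'https://api.scraperwiki.com/api/1.0/datastore/sqlite' \
              "?format=jsondict&name=#{scraper}" \
              "&query=#{URI.encode_www_form_component(query)}"

    # Each row comes back as a hash keyed on the column names
    JSON.parse(URI.open(api_url).read).each do |row|
      puts "#{row['uid']}  #{row['address']}"
    end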

Second, we’re going to break the job into manageable chunks by focusing on target groups of councils, and just to sweeten things – as if building a national open database of planning applications wasn’t enough ;-) – we’re going to offer small bounties (£75) for successful scrapers for these councils.

We have some particular requirements designed to make the system maintainable and do things the right way, but few of them are set in stone, so feel free to respond with suggestions if you want to do it in a different way.

For example, the scraper should keep itself current (running on a daily basis), but also behave nicely (not putting an excessive load on ScraperWiki or the target website by trying to get too much data in one go). In addition we propose that the scrapers should operate by updating current applications on a daily basis and also making inroads into the backlog by gathering a batch of previous applications.

We have set up three example scrapers that operate in the way we expect: Brent (Ruby), Nuneaton and Bedworth (Python) and East Sussex (Python). These scrapers perform 4 operations, as follows (there’s a rough skeleton in code after the list):

  1. Create new database records for any new applications that have appeared on the site since the last run and store the identifiers (uid and url).
  2. Create new database records of a batch of missing older applications and store the identifiers (uid and url). Currently the scrapers are set up to work backwards from the earliest stored application towards a target date in the past.
  3. Update the most current applications by collecting and saving the full application details. At the moment the scrapers update the details of all applications from the past 60 days.
  4. Update the full application details of a batch of older applications where the uid and url have been collected (as above) but the application details are missing. At the moment the scrapers work backwards from the earliest “empty” application towards a target date in the past.
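Here is that skeleton, in Ruby, assuming the classic ScraperWiki library (ScraperWiki.save_sqlite and ScraperWiki.select) and its default swdata table; scrape_new_identifiers, scrape_identifiers_before and scrape_details stand in for the site-specific parsing you’d actually write, so treat this as an outline rather than working code.

    require 'scraperwiki'
    require 'date'

    TARGET_DATE = Date.new(2005, 1, 1) # hypothetical backlog cut-off

    # 1. New applications since the last run: store the identifiers (uid, url)
    #    plus the minimal fields so we know they exist.
    scrape_new_identifiers.each do |app|
      ScraperWiki.save_sqlite(['uid'], app)
    end

    # 2. A batch of older applications, working backwards from the earliest one
    #    we already have towards TARGET_DATE.
    earliest = ScraperWiki.select("min(start_date) AS d FROM swdata").first['d']
    scrape_identifiers_before(earliest, TARGET_DATE).each do |app|
      ScraperWiki.save_sqlite(['uid'], app)
    end

    # 3. Refresh the full details of anything from the last 60 days.
    cutoff = (Date.today - 60).iso8601
    ScraperWiki.select("* FROM swdata WHERE start_date >= '#{cutoff}'").each do |app|
      ScraperWiki.save_sqlite(['uid'], scrape_details(app['url']))
    end

    # 4. Backfill a batch of older records whose details are still empty.
    ScraperWiki.select("* FROM swdata WHERE description IS NULL LIMIT 50").each do |app|
      ScraperWiki.save_sqlite(['uid'], scrape_details(app['url']))
    end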

The data fields to be gathered for each planning application are defined in this shared Google spreadsheet. Not all the fields will be available on every site, but we want all those that are there.

Note the following (there’s an example record after the list):

  • The minimal valid set of fields for an application is: ‘uid’, ‘description’, ‘address’, ‘start_date’ and ‘date_scraped’
  • The ‘uid’ is the database primary key field
  • All dates (except date_scraped) should be stored in ISO8601 format
  • The ‘start_date’ field is set to the earliest of the ‘date_received’ or ‘date_validated’ fields, depending on which is available
  • The ‘date_scraped’ field is a date/time (RFC3339) set to the current time when the full application details are updated. It should be indexed.
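As an illustration (with invented values), a minimal valid record might look like this in Ruby, with ISO 8601 dates and an RFC 3339-style date_scraped:

    require 'date'
    require 'time'

    application = {
      'uid'          => '11/01234/FUL',                  # the site's own reference – primary key
      'description'  => 'Single storey rear extension',
      'address'      => '1 Example Street, Anytown AN1 1AA',
      'start_date'   => Date.new(2011, 3, 14).iso8601,   # earliest of date_received / date_validated
      'date_scraped' => Time.now.utc.iso8601             # date/time the full details were scraped
    }

    # ScraperWiki.save_sqlite(['uid'], application)      # 'uid' as the unique key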

So how do you get started? Here’s a list of 10 non-standard authorities that you can choose from: Aberdeen, Aberdeenshire, Ashfield, Bath, Calderdale, Carmarthenshire, Consett, Crawley, Elmbridge, Flintshire. Have a look at the sites and then let me know if you want to reserve one and how long you think it will take to write your scraper.

Happy scraping.


Planning Alerts: first fruits

PlanningAlerts is coming soon

Well, that took a little longer than planned…

[I won’t go into the details, but suffice to say our internal deadline got squeezed by the combination of a fast-growing website, the usual issues of large datasets, and that tricky business of finding and managing coders who can program in Ruby, get data, and be really good at scraping tricky websites.]

But I’m pleased to say we’re now well on our way to not just resurrecting PlanningAlerts in a sustainable, scalable way but a whole lot more too.

Where we’re heading: an open database of UK planning applications

First, let’s talk about the end goal. From the beginning, while we wanted to get PlanningAlerts working again – the simplicity of being able to put in your postcode and email address and get alerts about nearby planning applications is both useful and compelling – we also knew that if the service was going to be sustainable, and serve the needs of the wider community, we’d need to do a whole lot more.

Particularly with the significant changes in the planning laws and regulations that are being brought in over the next few years, it’s important that everybody – individuals, community groups, NGOs, other websites, even councils – have good and open access to not just the planning applications in their area, but in the surrounding areas too.

In short, we wanted to create the UK’s first open database of planning applications, free for reuse by all.

That meant not just finding when there was a planning application, and where (though that’s really useful), but also capturing all the other data too, and keeping that information updated as the planning application went through the various stages (the original PlanningAlerts just scraped the information once, when it was found on the website, and even then pretty much just got the address and the description).

Of course, were local authorities to publish the information as open data, for example through an API, this would be easy. As it is, with a couple of exceptions, it means an awful lot of scraping, and some pretty clever scraping too, not to mention upgrading the servers and making OpenlyLocal more scalable.

Where we’ve got to

Still, we’ve pretty much overcome these issues and now have hundreds of scrapers working, pulling the information into OpenlyLocal from well over a hundred councils, and now have well over half a million planning applications in there.

There are still some things to be sorted out – some of the council websites seem to shut down for a few hours overnight, meaning they appear to be broken when we visit them, others change URLs without redirecting to the new ones, and still others are just, well, flaky. But we’ve now got to a stage where we can start opening up the data we have, for people to play around with, find issues with, and start to use.

For a start, each planning application has its own permanent URL, and the information is also available as JSON or XML:
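For example, here’s a rough sketch of reading one application’s data in Ruby; the id is made up, and the /planning_applications path and .json extension are my guesses at the URL pattern rather than something copied from the site.

    require 'open-uri'
    require 'json'

    id  = 123456  # hypothetical application id
    app = JSON.parse(URI.open("http://openlylocal.com/planning_applications/#{id}.json").read)

    puts app['address']      # assumed field names, matching the scraper fields above
    puts app['description']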

There’s also a page for each council, showing the latest planning applications, and the information here is available via the API too:

There’s also a GeoRSS feed for each council, allowing you to keep up to date with the latest planning applications for your council. It also means you can easily create maps or widgets showing the council’s latest applications.

Finally, Andrew Speakman, who’d coincidentally been doing some great stuff in this area, has joined the team as Planning editor, to help coordinate efforts and liaise with the community (more on this below).

What’s next

The next main task is to reinstate the original PlanningAlert functionality. That’s our focus now, and we’re about halfway there (and aiming to get the first alerts going out in the next 2-3 weeks).

We’ve also got several more councils and planning application systems to add, and this should bring the number of councils we’ve got on the system to between 150 and 200. This will be an ongoing process, over the next couple of months. There’ll also be some much-overdue design work on OpenlyLocal so that the increased amount of information on there is presented to the user in a more intuitive way – please feel free to contact us if you’re a UX person/designer and want to help out.

We also need to improve the database backend. We’ve been using MySQL exclusively since the start, but MySQL isn’t great at spatial (i.e. geographic) searches, restricting the sort of functionality we can offer. We expect to sort this in a month or so, probably moving to PostGIS, and after that we can start to add more features, finer grained searches, and start to look at making the whole thing sustainable by offering premium services.
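To give a flavour of why that matters, this is the sort of radius query PostGIS makes cheap – sketched here as raw SQL fired through ActiveRecord from within a Rails app; the table name, the geometry column and the coordinates are all assumptions for illustration, not our actual schema.

    require 'active_record' # assumes a Rails/ActiveRecord app with a PostGIS database

    lat, lng, metres = 51.112, -0.187, 800  # e.g. "applications within 800m of this point"

    sql = <<-SQL
      SELECT * FROM planning_applications
      WHERE ST_DWithin(location::geography,
                       ST_SetSRID(ST_MakePoint(#{lng}, #{lat}), 4326)::geography,
                       #{metres})
    SQL

    nearby = ActiveRecord::Base.connection.select_all(sql)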

We’ll be working too on liaising with councils who want to offer their applications via an API – as the ever pioneering Lichfield council already does – or a nightly data dump. This not only does the right thing in opening up data for all to use, but also means we don’t have to scrape their websites. Lichfield, for example, uses the Idox system, and the web interface for this (which is what you see when you look at a planning application on Lichfield’s website) spreads the application details over 8 different web pages, but the API makes this available on a single URL, reducing the work the server has to do.

Finally, we’re going to be announcing a bounty scheme for the scraper/developer community to write scrapers for those areas that don’t use one of the standard systems. Andrew will be coordinating this, and will be blogging about this sometime in the next week or so (and you can contact him at planning at openlylocal dot com). We’ll also be tweeting progress at @planningalert.

Thanks for your patience.


Open data, fraud… and some worrying advice

One of the most commonly quoted concerns about publishing public data on the web is the potential for fraud – and certainly the internet has opened up all sorts of new routes to fraud, from Nigerian email scams, to phishing for bank account logins, to key-loggers, to identity theft.

Many of these work using two factors – the acceptance of things at face value (if it looks like an email from your bank, it is an email from them), and flawed processes designed to stop fraud but which inconvenience real users while making life easy for criminals.

I mention this because of some pending advice from the Local Government Association to councils regarding the publication of spending data, which strikes me as not just flawed, but highly dangerous and an invitation to fraudsters.

The issue surrounds something that may seem almost trivial, but bear with me – it’s important, and it’s off such trivialities that fraudsters profit.

In the original guidance for councils on publishing spending data we said that councils should publish both their internal supplier IDs and the supplier VAT numbers, as it would greatly aid the matching of supplier names to real-world companies, charities and other organisations, which is crucial in understanding where a local council’s money goes.

When the Local Government Association published its Guidance For Practitioners it removed those recommendations, in order to prevent fraud. It has also suggested using the internal supplier ID as a unique key to confirm supplier identity. This betrays a startling lack of understanding, and worse, opens up a serious vector to allow criminals to defraud councils of large sums of money.

Let’s take the VAT numbers first. The main issue here appears to be so-called missing trader fraud, whereby VAT is fraudulently claimed back from governments. Now it’s not clear to me that publishing VAT numbers alongside supplier names makes this fraud any easier, and you would think the Treasury, which recommends publishing the VAT numbers for suppliers in its guidance (PDF), would be alert to this (I’m told they did check with HMRC before issuing their guidance).

However, that’s not the point. If it’s about matching VAT numbers to supplier names, there are already several routes for doing this, with the ability to retrieve tens of thousands of them in the space of an hour or so, including this one:

http://www.google.co.uk/#sclient=psy&hl=en&q=%27vat+number+gb%27+site:com

Click on that link and you’ll get something like this:

Whether you’re a programmer or not, you should be able to see that it’s a trivial matter to go through those thousands of results and extract the company name and VAT number, and bingo, you’ve got that which the LGA is so keen for you not to have. So those who are wanting to match council suppliers don’t get the help a VAT number would give, and fraudsters aren’t disadvantaged at all.

Now, let’s turn to the rather more serious issue of internal Supplier IDs. Let me make it clear: when matching council or central government suppliers, internal Supplier IDs are useful – they make the job easier and the matching more accurate, and they also help with understanding how much in total redacted payees are receiving (you’d be concerned if a redacted person/company received £100,000 over the course of a year, and without some form of supplier ID you won’t know that). However, it’s not some life-or-death battle over principle for me.

The reason the LGA, however, is advising councils not to publish them is much more serious, and dangerous. In short, they are proposing to use the internal Supplier ID as a key to confirm the supplier’s identity, and so allow the supplier to change details, including the supplier’s bank account (the case brought up here to justify this was the recent one of South Lanarkshire, which didn’t involve any information published as open data, just plain old fraudster ingenuity).

Just think about that for a moment, and then imagine that it’s the internal ID number they use for you in connection with paying your housing benefits. If you want to change your details – say you wanted to pay the money into a different bank account – you’d have to quote it. And just how many of us would have somewhere both safe to keep it and easy to find (and what about when you separate from your partner)?

Similarly, where and how do we really think suppliers are going to keep this ID (stuck on a post-it note on the accounts receivable clerk’s computer screen?), and what happens when they lose it? How do they identify themselves to find out what it is, and how will a council go about issuing a new one should the old one be compromised – is there any way of doing this except by setting up a new supplier record, with all the problems that brings?

And how easy would it be to do a day or two’s temping in a council’s accounts department, do a dump/printout of all the Supplier IDs, and then pass them on to fraudsters? The possibilities – for criminals – are almost limitless, and the Information Commissioner’s Office should put a stop to this at once if it is not to lose a serious amount of credibility.

But there’s a bigger underlying issue here, and it’s not that organisations such as the LGA don’t get data (although that is a problem); it’s that such bodies think that by introducing processes they can engineer out all risk, and that leads to bad decisions. Tell someone that suppliers changing bank accounts is very rare and should always be treated with suspicion, and fraud becomes more difficult; tell someone that they should accept internal supplier IDs as proof of identity, and it becomes easy.

Government/big-company bureaucrats not only think like government/big-company bureaucrats, they build processes that assume everyone else does. The problem is that this both makes life more difficult for ordinary citizens (as most encounters with bureaucracy make clear), and makes it easy for criminals (who by definition don’t follow the rules).


Opening up council accounts… and open procurement

Since OpenlyLocal started pulling in council spending data, it’s niggled at me that it’s only half the story. Yes, as more and more data is published you’re beginning to get a much clearer idea of who’s paid what. And if councils publish it at a sufficient level of detail and consistently categorised, we’ll have a pretty good idea of what it’s spent on too.

However, useful though that is, that’s like taking a peek at a company’s bank statement and thinking it tells the whole story. Many of the payments relate to goods or services delivered some time in the past, some for things that have not yet been delivered, and there are all sorts of things (depreciation, movements between accounts, accruals for invoices not yet received) that won’t appear on there.

That’s what the council’s accounts are for — you know, those impenetrable things locked up in PDFs in some dusty corner of the council’s website, all sufficiently different from each other to make comparison difficult:

For some time, the holy grail for projects like OpenlyLocal and Where Does My Money Go has been to get the accounts in a standardized form to make comparison easy not just for accountants but for regular people too.

The thing is, such a thing does exist, and it’s sent by councils to central Government (the Department for Communities and Local Government to be precise) for them to use in their own figures. It’s a fairly hellishly complex spreadsheet called the Revenue Outturn form that must be filled in by the council (to get an idea have a look at the template here).

They’re not published anywhere by the DCLG, but they contain no state secrets or sensitive information; it’s just that the procedure being followed is the same one they’ve always followed, and so they are not published, even after the statistics have been calculated from the data (the Statistics Act apparently prohibits publication until the stats have been published).

So I had an idea: wouldn’t it be great if we could pull the data that’s sitting in all these spreadsheets into a database and so allow comparison between councils’ accounts, thus freeing it from those forgotten corners of government computers.

This would seem to be a project that would be just about simple enough to be doable (though it’s trickier than it seems) and could allow ordinary people to understand their council’s spending in all sorts of ways (particularly if we add some of those sexy Where Does My Money Go visualisations). It could also be useful in ways that we can barely imagine – some of the participatory budget experiments going on in Redbridge and other councils would be even more useful if the context of similar councils’ spending was added to the mix.
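For what it’s worth, the mechanical core of the job might look something like this Ruby sketch, which walks the worksheets of one return using the roo gem; the file name, sheet layout and the assumption that line labels sit in the first column with amounts in the second are all invented – the real forms are messier, which is exactly where the tricky part lies.

    require 'roo'

    book  = Roo::Spreadsheet.open('revenue_outturn_example.xlsx') # hypothetical file
    lines = []

    book.sheets.each do |sheet_name|
      sheet = book.sheet(sheet_name)
      (sheet.first_row..sheet.last_row).each do |i|
        label, amount = sheet.row(i)[0], sheet.row(i)[1]
        next unless amount.is_a?(Numeric)      # skip headers, notes and blank rows
        lines << { council: 'Example Council', sheet: sheet_name,
                   line: label.to_s.strip, amount_thousands: amount }
      end
    end

    # `lines` is now ready to be saved to a database for cross-council comparison.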

So how would this be funded? Well, the usual route would be for DCLG or perhaps one of the Local Government Association bodies such as IDeA to scope out a proposal, involving many hours of meetings, reams of paper, and running up thousands of pounds in costs, even before it’s started.

They’d then put the process out to tender, involving many more thousands in admin, and designed to attract those companies who specialise in tendering for public sector work. Each of those would want to ensure they make a profit, and so would work out how they’re going to do it before quoting, running up their own costs, and inflating the final price.

So here’s part two of my plan: instead of going down that route, I’d come up with a proposal that would:

  • be a fraction of that cost
  • be specified on a single sheet of paper
  • be paid for only if I delivered

Obviously there’s a clear potential conflict of interest here – I sit on the government’s Local Public Data Panel and am pushing strongly for open data, and also stand to benefit (depending on how good I am at getting the information out of those hundreds of spreadsheets, each with multiple worksheets, and matching the classification systems). The solution to that – I think – is to do the whole thing transparently, hence this blog post.

In a sense, what I’m proposing is that I scope out the project, solving those difficult problems of how to do it, with the bonus of instead of delivering a report, I deliver the project.

Is it a good thing to have all this data imported into a database, and shown not just on a website in a way non-accountants can understand, but also available to be combined with other data in mashups and visualisations? Definitely.

Is it a good deal for the taxpayer, and is this open procurement a useful way of doing things? Well you can read the proposal for yourself here, and I’d be really interested in comments both on the proposal and the novel procurement model.


A simple demand: let us record council meetings

A couple of months ago we had the ridiculous situation of a local council hauling up one of their councillors in front of a disciplinary hearing for posting videos of the council meeting on YouTube.

The video originated from the council’s own webcasts, and the complaint by Councillor Kemble was that in posting these videos on YouTube, another councillor, Jason Kitcat

(i) had failed to treat his fellow councillors with respect, by posting the clips without the prior knowledge or express permission of Councillor Theobald or Councillor Mears; and
(ii) had abused council facilities by infringing the copyright in the webcast images

and in doing so had breached the Members Code of Conduct.

Astonishingly, the standards committee found against Kitcat and ruled he should be suspended for up to six months if he does not write an apology to Cllr Theobald and submit to re-training on the roles and responsibilities of being a councillor, and it is only the fact that he is appealing to the First-Tier Tribunal (which apparently the council has decided to fight by hiring outside counsel) that has allowed him to continue.

It’s worth reading the investigator’s report (PDF, of course) in full for a fairly good example of just how petty and ridiculous these issues become, particularly when the investigator writes things such as:

I consider that Cllr Kitcat did use the council’s IT facilities improperly for political purposes. Most of the clips are about communal bins, a politically contentious issue at the time. The clips are about Cllr Kitcat holding the administration politically to account for the way the bins were introduced, and were intended to highlight what he believed were the administration’s deficiencies in that regard, based on feedback from certain residents.
Most tellingly, clip no. 5 shows the Cabinet Member responsible for communal bins in an unflattering and politically unfavourable light, and it is hard to avoid the conclusion that this highly abridged clip was selected and posted for political gain.

The “using IT facilities” refers, by the way, not to using the council’s own computers to upload or edit the videos (it seems agreed by all that he used his own computer for this), but to the fact that the webcasts were made and published on the web using the council’s equipment (or at least that of its supplier, Public-i). Presumably if he’d taken an extract from the minutes of a meeting published on the council’s website, that would also have been using the council’s IT resources.

However, let’s step back a bit. This, ultimately, is not about councillors not understanding the web, failing to get new technology and the ways it can open up debate. This is not even about the somewhat restrictive webcasting system which apparently only has the past six months’ meetings and is somewhat unpleasant to use (particularly if you use a Mac, or Linux — see a debate of the issues here).

This is about councillors failing to understand democracy: about the ability to take the same material and make up your own mind, and, critically, to try to persuade others of that view.

In fact the investigator’s statement above – taking “a politically contentious issue at the time… holding the administration politically to account for the way the bins were introduced… to highlight what he believed were the administration’s deficiencies in that regard” – is surely a pretty good benchmark for a democracy.

So here’s a simple suggestion for those drawing up the local government legislation at the moment – no, let’s make that a demand, since that’s what it should be in a democracy (not a subservient request to your ‘betters’):

Give the public the right to record any council meeting using any device – Flip cams, tape recorders, frankly any darned thing they like – as long as it doesn’t disrupt the meeting.

Not only would this open up council meetings and their obscure committees to wider scrutiny, it would also be a boost to hyperlocal sites that are beginning to take the place of the local media.

And if councils want to go to the expense of webcasting their meetings, then require them to make the webcasts available to download under an open licence. That way people can share them, convert them into open formats that don’t require proprietary software, subtitle them, and yes, even post them on YouTube.

I can already hear local politicians saying it will reduce the quality of political discourse, that people may use it in ways they don’t like and can’t control.

Does this seem familiar? It should. It’s the same arguments being given against publishing raw data. The public won’t understand. There may be different interpretations. How will people use it?

Well, folks, that’s the point of a democracy. And that’s the point of a data democracy. We can use it in any way we damn well please. The public record is not there to make incumbent councillors or senior staff members look good. It’s there to allow them to be held to account. And to allow people to make up their own minds. Stop that, and you’re stopping democracy.

Links: For more posts relating to this case, see also Jason Kitcat’s own blog posts, the Brighton Argus post, and posts from Mark Pack at Liberal Democrat Voice, Jim Killock, Conservative Home, and even a tweet from Local Government minister Grant Shapps.


Introducing OpenCharities: Opening up the Charities Register

A couple of weeks ago I needed a list of all the charities in the UK and their registration numbers so that I could try to match them up to the local council spending data OpenlyLocal is aggregating and trying to make sense of. A fairly simple request, you’d think, especially in this new world of transparency and open data, and for a dataset that’s uncontentious.

Well, you’d be wrong. There’s nothing at data.gov.uk, nothing at CKAN and nothing on the Charity Commission website, and in fact you can’t even see the whole register on the website, just the first 500 results of any search/category. Here’s what the Charity Commission says on their website (NB: extract below is truncated):

The Commission can provide an electronic copy in discharge of its duty to provide a legible copy of publicly available information if the person requesting the copy is happy to receive it in that form. There is no obligation on the Commission to provide a copy in this form…

The Commission will not provide an electronic copy of any material subject to Crown copyright or to Crown database right unless it is satisfied… that the Requestor intends to re-use the information in an appropriate manner.

Hmmm. Time for Twitter to come to the rescue to check that some other independently minded person hasn’t already solved the problem. Nothing, but I did get pointed to this request for the data to be unlocked, with the very recent response by the Charity Commission, essentially saying, “Nope, we ain’t going to release it”:

For resource reasons we are not able to display the entire Register of Charities. Searches are therefore limited to 500 results… We cannot allow full access to all the data, held on the register, as there are limitations on the use of data extracted from the Register… However, we are happy to consider granting access to our records on receipt of a written request to the Departmental Record Officer

OK, so it seems as though they have no intention of making this data available anytime soon (I actually don’t buy that there are Intellectual Property or Data Privacy issues with making basic information about charities available, and if there really are this needs to be changed, pronto), so time for some screen-scraping. Turns out it’s a pretty difficult website to scrape, because it requires both cookies and javascript to work properly.

Try turning off both in your browser and see how far you get – then you’ll also get an idea of how difficult it is to use if you have accessibility issues. And check out their poor excuse for an accessibility statement, i.e. tough luck.

Still, there’s usually a way, even if it does mean some pretty tortuous routes, and like the similarly inaccessible Birmingham City Council website, this is just the sort of challenge that stubborn so-and-so’s like me won’t give up on.

And the way to get the info seems to be through the geographical search (other routes relied upon Javascript), and although it was still problematic, it was doable. So, now we have an open data register of charities, incorporated into OpenlyLocal, and tied in to the spending data being published by councils.
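For the curious, the shape of that approach is roughly as below, using Mechanize in Ruby (which keeps cookies across requests, the thing the Commission’s site insists on); the search URL, form name and field names are placeholders rather than the site’s real ones, so this is an outline of the technique, not a working scraper.

    require 'mechanize'

    SEARCH_URL = 'http://example.com/charity-geographical-search' # placeholder URL

    agent = Mechanize.new
    page  = agent.get(SEARCH_URL)               # Mechanize stores the session cookies for us

    form = page.form_with(name: 'searchForm')   # hypothetical form name
    form['area'] = 'Crawley'                    # hypothetical field: search one area at a time
    results = agent.submit(form)

    results.search('table.results tr').each do |row|
      cells = row.search('td').map { |td| td.text.strip }
      next if cells.empty?
      puts cells.inspect                        # e.g. charity number, name, registration date
    end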

Charity supplier to Local authority

And because this sort of thing is so easy, once you’ve got it in a database (Charity Commission take note), there are a couple of bonuses.

First, it was relatively easy to knock up a quick and very simple Sinatra application, OpenCharities:

Open Charities :: Opening up the UK Charities Register

If there’s any interest, I’ll add more features to it, but for now, it’s just the simplest of things: a web application with a unique URL for every charity based on its charity number, with the basic information for each charity available as data (XML, JSON and RDF). It’s also searchable, and sortable by most recent income and spending, and for linked data people there are dereferenceable Resource URIs.
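In spirit, the routing is as simple as the toy Sinatra sketch below – one URL per charity number, with a .json variant (XML and RDF would follow the same pattern). The in-memory hash stands in for the real database, and none of this is OpenCharities’ actual code.

    require 'sinatra'
    require 'json'

    CHARITIES = {
      '1234567' => { 'title' => 'Example Trust', 'income' => 10_000 } # invented record
    }

    # The .json route is declared first so it wins over the plain route below.
    get '/charities/:number.json' do
      charity = CHARITIES[params[:number]] or halt 404
      content_type :json
      charity.to_json
    end

    get '/charities/:number' do
      charity = CHARITIES[params[:number]] or halt 404
      "#{charity['title']} (charity no. #{params[:number]})"
    end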

This is very much an alpha application: the design is very basic and it’s possible that there are a few charities missing – for two reasons. One: the Charity Commission kept timing out (I think I managed to pick up all of those, and any stragglers should get picked up when I periodically re-run the scraper); and two: there appears to be a bug in the Charity Commission website, so that when there are between 10 and 13 entries, only 10 are shown, and there is no way of seeing the additional ones. As a benchmark, there are currently 150,422 charities in the OpenCharities database.

It’s also worth mentioning that due to inconsistencies with the page structure, the income/spending data for some of the biggest charities is not yet in the system. I’ve worked out a fix, and the entries will be gradually updated, but only as they are re-scraped.

The second bonus is that the entire database is available to download and reuse (under an open, share-alike attribution licence). It’s a compressed CSV file, weighing in at just under 20MB for the compressed version, and should probably only be attempted by those familiar with manipulating large datasets (don’t try opening it up in your spreadsheet, for example). I’m also in the process of importing it into Google Fusion Tables (it’s still churning away in the background) and will post a link when it’s done.
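If you do want to poke at it without a spreadsheet, streaming it row by row is straightforward; the Ruby sketch below assumes a gzip-compressed file with a header row and ‘title’/‘income’ columns, which may not match the actual download exactly.

    require 'zlib'
    require 'csv'

    big_charities = []

    Zlib::GzipReader.open('open_charities.csv.gz') do |gz|   # hypothetical filename
      CSV.new(gz, headers: true).each do |row|
        big_charities << row['title'] if row['income'].to_i > 10_000_000
      end
    end

    puts "#{big_charities.size} charities with income over £10m"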

Now, back to that spending data.


New feature: search for information by postcode

Why was it important that the UK government open up the geographic infrastructure? Because it makes so many location-based things that were once tortuous almost trivial.

Previously, getting open data about your local councillors, given just a postcode, was a tortuous business, requiring multiple calls to different sites. Now, it is easy. Just go to http://openlylocal.com/areas/postcodes/[yourpostcodehere] and, bingo, you’re done.

You can also just put your postcode in the search box on any OpenlyLocal page to do the same thing. And, obviously, you can also download the data as XML or JSON, and with an open data licence that allows reuse by anybody, even commercial reuse.
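So, for instance, a script can do the lookup in a couple of lines; the .json extension here follows the pattern used elsewhere on OpenlyLocal, and whether the postcode needs its space removed is an assumption – check the returned data.

    require 'open-uri'
    require 'json'

    postcode = 'SW1A1AA' # example postcode (space removed – an assumption)
    data = JSON.parse(URI.open("http://openlylocal.com/areas/postcodes/#{postcode}.json").read)

    puts data.inspect    # councillors, wards etc. for that postcode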

There’s still a little bit of tweaking to be done. I need to match up postcodes with county electoral divisions, and I’m planning on adding RDF to the data types returned. Finally, it’d be great to show the ward boundaries on a map, but I think that may take a little more work.


The GLA and open data: did he really say that?

The launch on Friday of the Greater London Authority’s open data initiative (aka London Datastore) was a curious affair, and judging from some of the discussions in the pub after, I think that the strangeness – a joint teleconferenced event with CES Las Vegas – possibly overshadowed its significance and the boldness of the GLA’s action.

First off, the technology let it down – if Skype wanted to give a demo of just how far short its video conferencing is from prime time, it did a perfect job. Boris did a great impromptu stand-up routine, looking for all the world like he was still up from the night before, but the people at CES in Las Vegas missed the performance, their images and words only occasionally stuttering into life to interrupt the Windows/Skype error messages.

However in between the gags Boris came out with this nugget, “We will open up all the GLA’s data for free reuse”.

What does that mean, I wondered – all their data? All that’s easy to do? Does it include info from Transport for London (TfL) and the Metropolitan Police? To be honest I sort of assumed it was Boris just paraphrasing. Nevertheless, I thought, it could be a good stick to enforce change later on.

However then it was Deputy Mayor Sir Simon Milton’s turn to give the more scripted, more plodding, more coherent version. This was the bit where we would find out what’s really going to happen. [What you need to realise is that the GLA doesn’t actually have a lot of its own data – mostly it’s just some internal stuff, slices of central government data, and groupings of London council info. The good stuff is owned by those huge bodies, such as TfL and the Met, that it oversees.]

So when Sir Simon said, “I hope that our discussions with the GLA group will be fruitful and that in the short term we can encourage them to release that data which is not tied to commercial contracts and in the longer term encourage them when these contracts come up for renewal to apply different contractual principles that would allow for the release of all of their data into the public domain”, all I heard was yada yada yada.

The next bit, however, genuinely took me by surprise:

"I can confirm today, however, that as a result of our discussions around the Datastore, TfL are willing to make raw data available through the Datastore. Initially this will be data which is already available for re-use via the TfL website, including live feeds from traffic cameras, geo-coded information on the location of Tube, DLR and Overground stations, the data behind the Findaride service to locate licensed mini-cab and private hire operators and data on planned works affecting weekend Tube services.

"TfL will also be considering how best to make available detailed timetabling data for its services and welcomes examples of other data which could also be prioritised for inclusion in the Datastore such as the data on live departures and Tube incidents on TfL’s website"

So stunned was I, in fact (and many others too), that we didn’t ask any questions when he finished talking, or for that matter congratulate Boris/Simon on the steps they were taking.

Yes, it’s nothing that hasn’t been done in Washington DC or San Francisco, and it isn’t as big a deal as the Government’s open data announcement on December 7 (which got scandalously little press coverage, even in the broadsheets, yet may well turn out to be the most important act of this government).

However, it is a huge step for local government in the UK and sets a benchmark for other local authorities to attain, and what the GLA has already achieved with Transport for London will only have come after a considerable trial of will – one, significantly, that they won.

So, Simon & Boris, and all those who fought the battle with TfL, well done. Now let's see some action with the other GLA bodies - the Met, London Development Agency, London Fire Brigade, he London Pensions Fund Authority in particular (I'm still trying to figure out its relationship to Visit London and the London Travel Watch).

Update: Video embedded below



Opening up Local Spending Reports on OpenlyLocal

As I mentioned in the last post, I’ve recently added council- and ward-level statistics to OpenlyLocal, using the data from the Office for National Statistics Neighbourhood Statistics database. All very well, and nice to have it in the same place as the democratic info.

However, what I was really interested in was getting and showing statistics about local areas that are a bit more, well, meaty. So when I did the statistical backend of OpenlyLocal I wanted to make sure that I could use it for other datasets from other sources.

The first of those is now online, and it’s a good one, the 2006-07 Local Spending Report for England, published in April 2009. What is this? In a nutshell it lists the spending by category for every council in England at the time of the report (there have been a couple of new ones since then).

Now this report has been available to download online if you knew it existed, as a pretty nasty and unwieldy spreadsheet (in fact the recent report to Parliament, Making local public expenditure data public and the development of Local Spending Reports, even has several backhanded references to the inaccessibility of it).

However, unless you enjoy playing with spreadsheets (and at the very minimum know how to unhide hidden sheets and read complex formulae), it’s not much use to you. Much more helpful, I think, is an accessible table you can drill down into for more details.

Let’s start with the overview:

Overview of Local Spending by Council for England

Here you can see the total spending for each council over all categories (and also a list of the categories). Click on the magnifying glass at the right of each row and you’ll see a breakdown of spending by main category:

Local Spending breakdown for given council

Click again on the magnifying glass for any row now and you’ll see the breakdown of spending for the category of spending in that row:

Finally (for this part), if you click on the magnifying glass again you’ll get a comparison with other councils of the same type (District, County, Unitary, etc):

You can also compare between all councils. From the main page for the Local Spending Dataset, click on one of the categories and it will show you the totals for all councils. Click on one of the topics on that page and it will give you all councils for that topic. Well, hopefully you get the idea. Basically, have a play and give us some feedback.

[There'll also be a summary of the figures appearing on the front page for each council sometime in the next few hours.]

There’s no fancy javascript or visualizations yet (although we are talking with the guys at OKFN, who do the excellent WhereDoesMyMoneyGo, about collaborating), but that may come. For the moment, we’ve kept it simple, understandable, and accessible.

Comments, mistakes found, questions all welcome in the usual locations (comments below, twitter or email at CountCulture at gmail dot com).


