A Typical Class Project at Salisbury University: Evaluating Geocoding Accuracies

I’ve always been proud of our Salisbury University GIS students, and I love to push them as far as their little minds can handle. You may recall that last spring I had my Advanced GIS students perform independent GIS projects and present those projects as posters at an Undergraduate Research Conference. Well, this fall I am teaching GIS Programming, and have 7 awesome students (pictures and bios to follow). We started the year off learning spatial SQL with Postgres and PostGIS. We have now moved into Python, which includes ArcPy as well as other Python packages.

The semester was going so well, and the students were so responsive to anything I asked, that I said what the heck, let’s try something crazy. So, I showed the students how to use two Python geocoding packages (geocoder and censusgeocode) and then said:

Why don’t we conduct a research project over the next week to test the match rates and positional accuracies of the Google API and the United States Census Bureau API?

So yeah, I gave them a week to put this together: design, code, analyze, and write. This meant they had to integrate a lot of what they have learned over the years (programming, GIS, statistics, etc.). And, like most of my students at this level, they didn’t disappoint me.

I just uploaded their work onto researchgate:

 Click for ResearchGate Article  

I was surprised by how little there is out there in terms of quantitative assessment of geocoding accuracies.  I hope you have a chance to click on the link and check out the working paper (we will submit it to a journal sometime soon).  Also, I included a short abstract below so that you can see the results of our work (note: our paper includes the original data and the source code for performing the geocoding):

Undergraduate Research in Action: Evaluating the positional differences between the Google Maps and the United States Census Bureau geocoding APIs

Abstract: As part of a class assignment in GIS Programming at Salisbury University, students evaluated 106 geographically known addresses to determine the match rate and positional accuracy obtained using the Google and the United States Census Bureau geocoding application programming interfaces (APIs). The results showed that 96.2% of the addresses submitted to the Google API were successfully geocoded, while 84% of the addresses submitted to the Census Bureau API were successfully geocoded. Further, the Google API matched 90% of the addresses with a ROOFTOP designation. The average positional accuracy of the Google-derived addresses was 80 m overall, and 65 m for those geocoded with the ROOFTOP designation, while the Census Bureau positional accuracy was 271.09 m.

So yeah, this is what you can do with 7 GIS undergraduates at Salisbury University: they work hard, fast, and are a very creative bunch.


Open Source GIS Course at UMD – the Syllabus

UMD provided me with enrollment numbers, and we are getting a pretty full class. However, a few people on the “outside” have asked for a little more detail about what will be covered. So, I put together a quick outline of each of our class meetings.

Keep in mind that, in addition to the class meetings, there will be weekly discussions and short assignments. Each class session is 2.5 hours long. The sessions colored orange are classes where I will be on campus live (please note that this is tentative, and the dates may slide around a bit), giving the lecture and assisting students in the lab. The other class sessions are a combination of online video lectures and laboratory exercises. For those with full-time jobs, most of the material can be done at home on your own devices, although being in the lab with the TA will be helpful.

Again, please feel free to email  geog-gis@umd.edu to get more information about signing up for the class. Continue reading

k-Nearest Neighbor with Postgres

Well, even though you didn’t ask for it, I decided to create a k-nearest neighbor analysis in Postgres/PostGIS. This has lots of potential: find the 5 nearest ATMs to every fast food restaurant; find the 3 closest medical clinics to each school; or find the 10 closest bars to each college.

Now, the LIMIT clause does allow us to find the k closest features to a particular feature. But in this case, I’m not interested in a particular feature; I want the results for all features.
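For a single feature, that LIMIT version looks something like this (a quick sketch; the table, column, and restaurant names are just placeholders):

-- 5 nearest ATMs to one particular restaurant, via ORDER BY + LIMIT
SELECT a.name, ST_Distance(f.geom, a.geom) AS dist
FROM atms AS a, fastfood AS f
WHERE f.name = 'Some Restaurant'
ORDER BY dist
LIMIT 5;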

To do this in Postgres, I’m going to show you two things that I hadn’t used until today: the rank() window function and PARTITION BY. A sketch of where this is headed follows. Continue reading
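Here is a minimal sketch of the idea, using the same placeholder tables as above. rank() numbers the ATMs by distance within each restaurant’s partition, and the outer query keeps the top 5:

-- Rank every ATM by distance within each restaurant's partition,
-- then keep the 5 nearest per restaurant.
SELECT restaurant, atm, dist
FROM (
    SELECT f.name AS restaurant,
           a.name AS atm,
           ST_Distance(f.geom, a.geom) AS dist,
           rank() OVER (PARTITION BY f.name
                        ORDER BY ST_Distance(f.geom, a.geom)) AS rnk
    FROM fastfood AS f, atms AS a
) AS ranked
WHERE rnk <= 5;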

k-nearest neighbor with SQL in Radian Studio

I wanted to give you another look at some features that Radian Studio will offer. I’ve shown how we can use SQL to replicate the ARC/INFO NEAR function, and how to perform nearest neighbor analysis. But another useful tool is the ability to identify k-nearest neighbors. That is, rather than just identifying the nearest neighbor, you might want to identify the two, three, or k nearest neighbors.

Radian will allow that functionality by using the COLLECT aggregate clause. The COLLECT aggregate collects values from a subgroup into a table, returning a table with one or more fields.

It is like a SELECT which runs on a group. COLLECT takes a table and returns a table without requiring us to write a FROM section as we would with a SELECT. This is stuff that the real grown-up databases like Oracle use, and Manifold is going to give it to us as part of Radian Studio.


-- For each park, collect the 3 nearest other parks and their distances.
SELECT park_no1,
       SPLIT(COLLECT park_no2, dist
             ORDER BY dist ASC FETCH 3)
FROM
(
    -- Distance from every park to every other park.
    SELECT a.name AS park_no1, b.name AS park_no2,
           GeomDistance(a.[geom (i)], b.[geom (i)], 0) AS dist
    FROM [parks Table] AS a, [parks Table] AS b
    WHERE a.name <> b.name
)
GROUP BY park_no1

Continue reading

Trade Area Analysis in Postgres / PostGIS

How I used SQL to generate an accumulated sum of spatial data

My friend Tomas does work with business analytics, and wanted to find a way to perform trade area analysis.  He had a bunch of stores, and a map of census areas with populations.  What he wanted to figure out was:

how far do I have to radiate out from each store before I hit 5,000 customers

So for each store, he basically wanted to generate concentric buffers and sum up the population of the census areas until he hit 5,000. Once he found the census areas, he also wanted to generate a convex hull to show the area. Esri has a really nice tool that performs this as part of their Business Analyst package, called threshold trade areas.

Check it out, it seems pretty cool.

Well, to help my friend I was thinking that I would determine the distance from every store to every census area, and then write a script with a giant for loop to iterate through each record, accumulating the results (of course, I would have to integrate some kind of do..while loop and an if…then to make sure I accumulated different counts for each store until I hit the threshold I wanted). At that point I began asking myself: so, how good of a friend is Tomas?

What I did instead was write an SQL script to do it. I’ve annotated it below with comments to explain what I was doing….

-- Outer query: for each store, wrap the accumulated census areas in a
-- convex hull and keep the largest accumulated population that stayed
-- under the threshold.
SELECT ST_ConvexHull(ST_Collect(g)) as geometry, max(sumpop) as sumpop, name
INTO tradearea
FROM
 (
  -- Running sum: for each store and each candidate distance (b.dist),
  -- total the population of every census area at that distance or closer.
  SELECT a.name, SUM(a.totpop) AS sumpop, ST_Collect(a.geometry) as g
  FROM
     -- Distance from every store to every census area.
     (SELECT stores.name, censusunit.totpop, censusunit.geometry,
      ST_Distance(censusunit.geometry,stores.geometry) as dist
      FROM stores, censusunit
      ) AS a,
     (SELECT stores.name, censusunit.totpop, censusunit.geometry,
      ST_Distance(censusunit.geometry,stores.geometry) as dist
      FROM stores, censusunit
      ) AS b
  WHERE a.name = b.name
  AND a.dist <= b.dist
  GROUP BY a.name, b.dist
  ) AS T1
WHERE sumpop < 5000
GROUP BY name

The middle portion (the two inner subqueries) collects the names of the stores, the population of the census areas, the distance between each store and each census area, and the geometry of the census areas for each combination of stores and census areas. So, if you have 5 stores and 1,000 census areas, you would have a table with 5,000 rows: Continue reading

SQL and ogr to create 4D lines

A friend recently asked me to help him generate polyline shapefiles with Z and M values that he could deliver to a customer. The problem was, the software he was using supported the import of Z and M values, but did not support exporting them. The other problem was, he had zillions of data tables that he needed to export!

Fortunately, PostGIS allows us to create Polyline ZM data rather easily.  The next part was to figure out how to get it exported to a shapefile.  So, here goes:

Assume you have a table with 4 columns: lineID, Z, M, and geometry. The geometry field is a point feature that represents the vertices of a line. The lineID separates all those points by which line they are part of, and the Z and M values are, well, Z and M.

Make the Polyline ZM

The SQL command to create a 4D line is:

SELECT  lineid, 
ST_MakeLine(ST_MakePoint(ST_X(geometry), 
                         ST_Y(geometry), z, m)
            ) AS g
FROM zpts
GROUP BY lineid;

In this case, I am making a line (ST_MakeLine) from a series of X (ST_X), Y (ST_Y), Z, and M values. Grouping the results of the query by the lineid creates one line for each set of points. A sketch of the export step follows. Continue reading
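As for the ogr half of the title: the export can be handled with GDAL’s ogr2ogr, reading straight from PostGIS. Something along these lines should do it, though the connection string here is a placeholder for your own setup, and you will want a reasonably recent GDAL (2.1 or later) for measure values in shapefiles:

ogr2ogr -f "ESRI Shapefile" lines_zm.shp \
    PG:"host=localhost dbname=mydb user=me" \
    -sql "SELECT lineid, ST_MakeLine(ST_MakePoint(ST_X(geometry), ST_Y(geometry), z, m)) AS g FROM zpts GROUP BY lineid" \
    -nlt LINESTRINGZM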

Doing a GROUP BY in ArcGIS

As most of you know, I am a big fan of spatial SQL. It is my go-to tool whenever I’m working with GIS. But I have seen too many people using ArcGIS get tripped up trying to summarize the results of a spatial operation, because ArcGIS does not support SQL. So today, to spare my ArcGIS friends the trouble of writing large “for” loops in ArcPy that take hours to run, I want to show you two lines of ArcPy that very quickly replicate the GROUP BY function in SQL:

Using my favorite GIS data set in Tompkins County, NY, assume we have two layers: parcels2007 and watersheds.
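For reference, the kind of summary we are after is what a GROUP BY produces directly. A sketch, assuming the two layers have already been intersected into a table carrying each watershed’s name and the area of each parcel piece (the table and field names here are hypothetical):

-- Total parcel area per watershed: the GROUP BY that the
-- ArcPy snippet replicates.
SELECT watershed, SUM(parcel_area) AS total_area
FROM parcels_in_watersheds
GROUP BY watershed;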

  Continue reading

Spatial is Not Special – Quadrat Analysis

In our book we illustrated the use of quadrat analysis for determining whether points were random, clustered, or distributed.  Figure 14.9 from the book showed a point sample of 2,500 points, and Table 14.4 showed the mathematical calculation for quadrat analysis.

[Figure 14.9: a point sample of 2,500 points]

[Table 14.4: the quadrat analysis calculations]

The calculations look pretty daunting, don’t they? But in actuality, it’s basic arithmetic. In this blog I am only going to illustrate how we obtained the correct variance-to-mean ratio using spatial SQL (a quick sketch follows). If you want to understand quadrat analysis, check out the book, or do a web search. Continue reading
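As a preview, here is a sketch of the idea in Postgres/PostGIS, assuming a quadrats grid layer and a pts point layer (both names hypothetical):

-- Step 1: count the points falling in each quadrat cell.
SELECT q.id, COUNT(p.geom) AS n
INTO quadrat_counts
FROM quadrats AS q
LEFT JOIN pts AS p ON ST_Contains(q.geom, p.geom)
GROUP BY q.id;

-- Step 2: the variance-to-mean ratio; near 1 suggests random,
-- well above 1 clustered, well below 1 dispersed.
SELECT VAR_SAMP(n) / AVG(n) AS var_to_mean
FROM quadrat_counts;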

Spatial is Not Special – Nearest Neighbor Index

 

It is nice to get back to the book, and start talking about Statistical Problem Solving in Geography again.  Today we are going to look at the Nearest Neighbor Index.  You can refer to chapter 14 where we illustrate the computation of the nearest neighbor index using a set of 7 points:

[Figure: the set of 7 sample points]

Then, for each point we determine the nearest neighbor, its distance, and ultimately the average nearest neighbor distance over all the points:

[Table: each point’s nearest neighbor, its distance, and the average nearest neighbor distance]

To develop the SQL, we will take it one step at a time.  First, we want to compute the distance from every point to every other point:

SELECT a.pt, b.pt, distance(a.[Geom (I)], b.[Geom (I)]) AS dist
FROM points AS a, points AS b
WHERE a.pt <> b.pt
ORDER BY dist

This query gives us a table of the distance from every point to every other point. We also play that game again where we alias the table “points” as “a” and “b” so that SQL thinks we have two different tables. And we have to put a WHERE clause in to make sure we aren’t measuring from a point to itself, because that distance would be 0. A sketch of the next step follows. Continue reading
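As a preview of the next step, we collapse that table to each point’s single nearest neighbor and then average those minimum distances. A sketch in the same syntax (which may need adjusting for your SQL engine):

SELECT AVG(nn_dist) AS mean_nn_dist
FROM
(
  -- Each point's distance to its single nearest neighbor.
  SELECT a.pt, MIN(distance(a.[Geom (I)], b.[Geom (I)])) AS nn_dist
  FROM points AS a, points AS b
  WHERE a.pt <> b.pt
  GROUP BY a.pt
) AS t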