Big Data Results

I wanted to revisit the taxi data example that I previously blogged about.  I had a 6GB file of 16 million taxi pickup locations and 263 taxi zones.  I wanted to determine the number of pickups in each zone, along with the sum of all the fares.  Below is a more in-depth review of what was done, but for those of you who don't want to read ahead, here are the result highlights:

Platform                                            Command                               Time
--------------------------------------------------  ------------------------------------  -------------
ArcGIS 10.4                                         AddJoin_management                    Out of memory
ArcGIS Pro                                          Summarize Within                      1h 27m*
ArcGIS Server (GeoAnalytics, Big Data File Share)   Summarize Within / Aggregate Points   ~2m
Manifold 9                                          GeomOverlayContainedPar               3m 27s
Postgres/PostGIS                                    ST_Contains                           10m 27s
Postgres/PostGIS (optimized)                        ST_Contains                           1m 40s
*I’m happy ArcGIS Pro ran at this speed, but I think it can do better.  This was a geodatabase straight out of the box; I think we can fiddle with indexes and even restructure the data to get things to run faster.  That is something I’ll work on next week.

I was sufficiently impressed with how some of the newer approaches were able to improve the performance.  Let’s dive in:

The Data and Computer

The data was obtained from the NYC Taxi and Limousine Commission for October 2012.  The approximately 16 million taxi pickup locations and 263 taxi zone polygons required around 6GB of storage.  I have the data in a geodatabase here.  As you can imagine, that is a lot of data.


I used my Cyberpower gaming PC, which runs Windows 10 and has an i7 processor (4 cores, 3.0GHz), a solid-state drive, and 12GB of RAM.   So, pretty much what every teenager has in their bedroom.

The Question

The question I wanted to answer was: how many taxi pickups were there in each zone, and what was the total amount of the fare?  Fair question (no pun intended!).  So, I decided to try to answer it with ArcGIS, Manifold, and Postgres.

ArcGIS 10.4

As most of you know, ArcGIS 10.4 is a 32-bit application.  So, I wondered how well it could tackle this problem.  I attempted to perform a spatial table join (AddJoin_management) between the taxi pickup locations and the taxi zones.  To give ArcGIS a fighting chance, I moved the data into a geodatabase (that way, the layers would have spatial indexes).  After running the join for a few hours, ArcGIS 10.4 reported an Out of Memory error.

ArcGIS Pro

Next, I moved on to ArcGIS Pro, which is a true 64-bit application.  ArcGIS Pro also has a number of tools that do exactly what I want; one of them is Summarize Within.   ESRI makes it really easy to ask these sorts of questions in ArcGIS Pro.  So, I ran the function and got a resulting table in 1h 27m.  At this point in my experiment, I was fairly pleased – at least I got an answer, and it is something I could do over a lunch break.

ArcGIS Server with GeoAnalytics Server

I knew that ESRI was touting their new GeoAnalytics Server, so I wanted to give that a try.   Unfortunately, I do not own GeoAnalytics Server.  Fortunately, a friend owns it and was able to test it out on his computer.  To my amazement, he ran the query in about 2m.  I was astounded – hats off to ESRI.  This product is designed for big data for sure.  I would say if you have an ArcGIS Server license, this is something worth checking out for big data processing.  Nothing cryptic like Hadoop – the same ArcGIS Pro interface is there to run the data under GeoAnalytics Server.

Manifold 9

As most of you know, I am a big fan of Manifold GIS, and have often discussed my work with the product.  Manifold 9 is designed for big data analytics, with a query engine that makes use of parallel processing.  The function I used was GeomOverlayContainedPar.  It actually works through a GUI, but I bypassed that and just wrote a straight-up SQL query, which is a bit more flexible:

SELECT s_mfd_id AS [mfd_id], sgeom AS [Geom], sumfare, avgfare, s_zone, numrides
INTO sumtable
FROM (
    SELECT s_mfd_id, count(o_mfd_id) AS numrides, avg([o_fare_amount]) AS avgfare,
           sum([o_fare_amount]) AS sumfare, first(s_geom) AS sgeom, first(s_zone) AS s_zone
    FROM (
        SELECT s_zone, o_fare_amount, s_mfd_id, s_geom, o_mfd_id
        FROM CALL GeomOverlayContainedPar(
            [taxi_zones] ([mfd_id], [zone], [Geom]),
            [pickup Drawing] ([pu_geom], [mfd_id], [fare_amount]),
            0, ThreadConfig(SystemCpuCount()))
    )
    GROUP BY s_mfd_id
)

I won’t go into detail on the query, but in this case I was using all 4 cores (8 threads, when you consider the hyperthreading) to process the data.  The query ran and returned the table in 3m 27s.  Again, I was sufficiently impressed, given that Manifold 9 sells for around $425.

I like to needle my friends at Manifold, so I sent them the data and the results.  Stay tuned – I’d be willing to bet that we see them get under 3 minutes fairly soon.

Postgres/PostGIS

It’s no secret that I’m also a fan of FOSS4G software like Postgres, and I teach a number of courses in the use of Postgres.  So, I wanted to see how this would run in Postgres with PostGIS.  The first thing I did was write a straight-up SQL statement:

SELECT count(*) AS totrides, taxizones.zone, sum(taxisandy.fare_amount)
FROM taxizones, taxisandy
WHERE ST_Contains(taxizones."Geom", taxisandy.pu_geom)
GROUP BY taxizones.zone;

Good grief, it doesn’t get much simpler than that.   This query ran in 10m 27s.  I was pleased with this result.  I mean, after all, it’s free!  And that query is super simple to write.  But I wasn’t done yet – I knew there were some ways to optimize things.

Postgres/PostGIS optimized

I had already created a spatial index, so that was good.  But there were two more things I wanted to try: vacuuming the tables and clustering the data.  So, what do these commands do?

VACUUM reclaims storage occupied by dead tuples.  In normal PostgreSQL operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a VACUUM is done.

CLUSTER physically reorders the data on the disk so that data that should be near one another in the database are actually near one another on the disk.  In other words, points in Brooklyn are now physically stored on the disk near other points in Brooklyn, and the same is true for all the other boroughs.  I wasn’t sure if this would do anything, since I already had a solid-state drive.  A friend of mine in the Computer Science Department told me that it would.  I would tell you what he said, but quite frankly his explanation was too technical for me!

So, how did I do this?  First, I vacuumed and clustered the data:

VACUUM ANALYZE taxizones ("Geom");   -- reclaim dead tuples and refresh planner statistics
VACUUM ANALYZE taxisandy (pu_geom);
CLUSTER taxisandy USING pugeom_idx;  -- physically reorder rows to match the spatial index
CLUSTER taxizones USING "Geom_x";

Now, running the cluster on the pickup locations did in fact take time – 18 minutes.  But that is a one-time expense; after that, we can run whatever query we want, over and over again.  The query is a little more involved than the previous one because I wanted to write the results to a new table, so I had to rejoin the results with the zones:

SELECT taxizones."Geom", sumfare, a.zone
INTO sumtable
FROM taxizones,
    (SELECT taxizones.zone, sum(taxisandy.fare_amount) AS sumfare
     FROM taxizones
     JOIN taxisandy
       ON ST_Contains(taxizones."Geom", taxisandy.pu_geom)
     GROUP BY taxizones.zone) AS a
WHERE taxizones.zone = a.zone;

Drum roll, please. The query completed in 1m 40s.  Wow!  Of course, with PostGIS you have to factor in the cost: $0.  I guess you get what you pay for????

So, what is the takeaway?  Well, GIS products are evolving, and are now positioned to handle really large data sets in ways we hadn’t been able to do before.  I’m impressed with each of these products.

Two final notes:

If you live in the Denver area, please come and visit with me as I teach two workshops on FOSS4G, big data geoanalytics, Python, and SQL: one at Colorado State University in Fort Collins on October 25, and one at Metropolitan State University of Denver on October 26.  I’d love to see you there!

And as always, all my online video courses are available at www.gisadvisor.com.   For this kind of work, you might find Spatial SQL with PostGIS and Big Data Analytics with GIS to be two very useful courses.


Finding “Dangles” with PostGIS

Do you have a set of lines and need to determine whether there are any “dangle” nodes?  A dangle is a line segment that overhangs another line segment.  Now, some dangles are valid, like a pipe that terminates in a cul-de-sac.

A few people have posted about this already, but I figured I would give it a shot as well, as I think my SQL is a little more terse.  Anyway, here is the query, and we’ll talk about it line by line:

SELECT DISTINCT g1 AS g INTO dangles
FROM plines, 
    (SELECT g AS g1 FROM  
         (SELECT g, count(*) AS cnt  
          FROM  
              (SELECT  ST_StartPoint(g) AS g FROM plines
               UNION ALL
               SELECT  ST_EndPoint(g) AS g FROM plines ) AS T1 
         GROUP BY g) AS T2
     WHERE cnt = 1) AS T3
WHERE ST_Distance(g1, g) BETWEEN 0.01 AND 2;

The first thing to notice is the innermost SELECT statement.  We are using ST_StartPoint and ST_EndPoint to grab the endpoints of the lines – these we’ll call nodes.

The next line to notice is where we get the count of the nodes.  We grab all the nodes and use the GROUP BY clause to count how many nodes occupy each place in space.  Now, an intersection of two lines will have 2 nodes (one from the first line and one from the second).  But a “dangle” will have only one node occupying its space.  This is where the next section of SQL comes into play.

What we want to do is select only those nodes where the count (cnt) is equal to 1.  That means the node is just sitting there in space.  It is a “dangle”.  But not all dangles are created equal, as I said above.  That final WHERE clause lets me specify how close a dangle node must be to another line – in the example above, between 0.01m and 2m, so a node actually touching another line (distance 0) doesn’t count.  The last bit of SQL to consider is the DISTINCT clause.  A node can be near more than one line, and we don’t want to double count it, so DISTINCT eliminates the duplicates.

That’s it!  Pretty easy.  Think of the ST_Distance function as a variant on the basic SQL for finding dangles.  There are other variants we could add if we’d like, such as requiring that the line the dangle touches be shorter than 5m, or something like that.  That would just be a matter of adding another WHERE clause, as sketched below.
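For instance, a minimal sketch of that 5m variant, assuming the same plines table and the query above: only the final WHERE clause changes, with ST_Length filtering on the nearby line.

WHERE ST_Distance(g1, g) BETWEEN 0.01 AND 2
  AND ST_Length(g) < 5;  -- keep only dangles near lines shorter than 5m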


Multi-Ring (non-overlapping) Buffers with PostGIS

I was interested in creating multi-ring buffers, but with a twist: I didn’t want the buffers to overlap one another.  In other words, if I had concentric buffers with distances of 100, 200, and 300 around a point, I wanted those buffers to reflect distances of 0-100, 100-200, and 200-300.  I didn’t want them overlapping one another.  You can actually do that with the PostGIS function ST_SymDifference, but there are a few nuances that you have to be aware of.
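To give a flavor of the idea before you watch, here is a minimal sketch of the trimming approach (the table name samples and column geom are assumptions, and it uses ST_Difference for the subtraction; the video builds the full solution with ST_SymDifference): each ring is a buffer with the next-smaller buffer removed, so the rings never overlap.

SELECT id, 100 AS dist, ST_Buffer(geom, 100) AS ring
FROM samples
UNION ALL
SELECT id, 200, ST_Difference(ST_Buffer(geom, 200), ST_Buffer(geom, 100))
FROM samples
UNION ALL
SELECT id, 300, ST_Difference(ST_Buffer(geom, 300), ST_Buffer(geom, 200))
FROM samples;  -- each ring excludes the smaller buffer inside it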

Unlike some of my longer videos, this one starts out with the answer, and then we walk through all the SQL.  You’ll see it isn’t so bad.  And you’ll continue to see that spatial is not special!  It’s only 20 minutes long, but the answer is shown in the first minute.

In the video I’ll slowly walk you through all the spatial SQL to create buffers for the points and trim all the overlaps so that there are no overlapping buffers.  You’ll learn some really cool Postgres commands, including:

ST_Buffer, ST_SymDifference, DISTINCT ON, and SET WITH OIDS.

I found myself amazed that with a few SQL tweaks, we were able to turn ordinary buffers into more useful non-overlapping buffers.  I hope you enjoy the video.

I’d like to create more videos like this one – please leave some comments below so that I know others want me to continue these kinds of tutorials.

 If you want to learn more about SQL, programming, open source GIS, or Manifold GIS, check out courses at www.gisadvisor.com.  

GIS Analysis of Overlapping Layers

My friend is attempting to quantify the area of different landuse values for different areas that are upstream from her sample points.  This means she needs sample points, landuse, and upstream areas (i.e. sub-watersheds).  The problem is, her watersheds overlap, the buffer distances around the sample points overlap themselves AND the watersheds, and she then needs to summarize the results.  It’s actually a tricky problem due to the overlaps: GIS software doesn’t really like it when features within a single layer overlap one another.  Also, if a buffer for a sample point overlaps two different watersheds, that becomes tricky too.

Sure, you can solve it with a few for loops, inserting the results into a new table, but that really is a hassle.  Also, I have to do it for different distances and different land cover types.

So, I once again turned to SQL – remember what I keep telling you – spatial is not special.  It’s just another data type.  This video steps you through performing a multi-ring buffer on overlapping objects from 3 different layers: sample points, watersheds, and land use.  As we step through the SQL, you’ll see how easy it is to put the query together.  And, at the end, you’ll see how flexible the query is should you want to change your objectives.  And, for good measure, we’ll throw in a little bit of parallel processing.
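As a rough sketch of the kind of query the video builds (all table and column names here, and the 500m buffer distance, are assumptions), you can intersect each sample point’s buffer with the watersheds and the landuse polygons, then total the area per point, watershed, and landuse class.  Because each intersection is computed row by row, overlapping features within a layer are not a problem.

SELECT p.id AS sample_id,
       w.id AS watershed_id,
       l.class AS landuse_class,
       sum(ST_Area(ST_Intersection(
             ST_Intersection(ST_Buffer(p.geom, 500), w.geom),
             l.geom))) AS area
FROM samples p
JOIN watersheds w ON ST_Intersects(ST_Buffer(p.geom, 500), w.geom)
JOIN landuse l    ON ST_Intersects(w.geom, l.geom)
GROUP BY p.id, w.id, l.class;  -- one area total per point/watershed/landuse combination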

Big Data GeoAnalytics – adding data

Continuing my series on big data geoanalytics, I wanted to show how to bring in large data sets so that we can start working with them. The data set we’ll use is the NYC taxi data that includes information on pickups and dropoffs. There are about 13 million records in a 2.2GB .csv file. That is not insanely large, but it is large enough to start messing around with (don’t worry, I have a few 20GB+ data sets that I am working with and will eventually show those to you as well).
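The video below shows the Manifold Future workflow; for comparison, a rough sketch of the same kind of load in Postgres might look like this (the file path and the trimmed-down column list are assumptions):

CREATE TABLE taxi_pickups (
    pickup_datetime  timestamp,
    pickup_lon       double precision,
    pickup_lat       double precision,
    fare_amount      numeric
);
COPY taxi_pickups FROM '/data/nyc_taxi.csv' WITH (FORMAT csv, HEADER true);
-- build point geometries and a spatial index for the analysis to come
ALTER TABLE taxi_pickups ADD COLUMN pu_geom geometry(Point, 4326);
UPDATE taxi_pickups SET pu_geom = ST_SetSRID(ST_MakePoint(pickup_lon, pickup_lat), 4326);
CREATE INDEX taxi_pickups_pu_geom_idx ON taxi_pickups USING GIST (pu_geom);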

This video will walk you through the steps I took to load and prepare the NYC taxi data inside of Manifold Future. My next posts will look at how we can begin interrogating the data source to find meaningful information.

I hope you enjoy the video. Please comment below – I’d love to hear what people think.


Big Data GeoAnalytics – Turning Points to Lines

In my last video, I gave a sort of mile-high view of how SQL can be used for big data geoanalytics.  I want to dive a little deeper and explore the idea of creating linear features from a time-series of points.

Once again, using some basic SQL and spatial SQL, we can perform basic time-series analysis.
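The video works in Manifold’s SQL, but the core of the idea translates to a single ordered aggregate in PostGIS – a minimal sketch, assuming a pickups table with a vehicle id, a timestamp, and a point geometry (all assumed names):

SELECT taxi_id,
       ST_MakeLine(pu_geom ORDER BY pickup_datetime) AS track
FROM taxi_pickups
GROUP BY taxi_id;  -- one line per taxi, with vertices in time order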

I’m enjoying making these videos, as they are helping me put my course on big data and GIS together.  I hope you like them too.  Please comment down below so that I know this is something the user community enjoys and is learning from.

Also, if you are interested in learning more about how to perform spatial SQL in Microsoft SQL Server, Postgres, or Manifold, visit my other site, www.gisadvisor.com to sign up for my online video courses.

Big data geo-analytics with SQL

I’m getting ready to create a course in big data analytics with GIS.  I have lots of ideas as to what to do, but one thing I know is that I will be using spatial databases and SQL.  I’ll also be using Manifold Future.

ESRI has recently introduced their ArcGIS GeoAnalytics Server, which will introduce many GIS professionals to big data analytics with GIS.  They have some interesting scenarios and example data using NYC taxi cabs.  I think these will be really good case studies.

This video (just shy of 20 minutes) will use SQL and Manifold to try to address these big data problems.

Keep an eye on my blog as I will be rolling out new ideas as I prepare my course for the Spring.

If you like the video and want to learn more about how to improve your spatial database skills, check out my videos at www.gisadvisor.com.

Maryland State GIS Conference (TuGIS)

The TuGIS training workshop on March 20, 2017 is complete – you can see the workshop evaluations below:

The workshop evaluations are in

(if you want to cut to the chase, the workshop results are here).

I had a great time teaching our two workshops at the TuGIS conference.  In the morning, my students and I presented Spatial SQL: A Language for Geographers, and in the afternoon we taught Python for Geospatial.

We knew expectations would be high: both courses sold out in 2 days, and we even expanded the class size to 38 people for each workshop!  I knew that teaching 38 people would be a challenge, but it would also be a great lesson in whether we could corral so many cats into a single technical workshop.  The workshop evaluations would be crucial for determining if we met our objectives.

The workshop evaluations were overwhelmingly positive.  For example:

  1. Over 90% said they enjoyed the workshop.
  2. Over 83% said it was much better than other GIS training they have attended.
  3. On a scale of 1-10, 95% of the attendees rated the course a 7 or above.
  4. 93% said they learned something new in the workshop.
  5. 89% said the workshops would help them in their careers.
  6. 91% said they would apply these skills to their job.

I decided to throw one curve-ball on the evaluation sheet and asked:

This was a half-day workshop. Most one-day GIS training classes cost around $600/day. If we developed other in-depth full-day workshops on topics like this for under $250, how likely would you be to participate in it?

It turned out that 89% of the respondents rated this a 7 or higher, indicating that almost 90% of the people valued the training enough to pay $250 for a full-day course (as opposed to $600 for most GIS courses).  This means it is possible to offer really good, low-cost training to GIS professionals.  Keep an eye out, as I am very likely to take these training classes on the road.

The comments the participants provided were great – they confirmed our belief that this was an excellent training course, and that the course needed to be expanded to 8 hours rather than 4 – most everyone felt there was simply too much information to absorb.

If you would like to see the results of the workshop evaluation, click the link below:

TuGIS Workshops – Google Forms

Finally, if you can’t make it to a live workshop, all of my video training courses are $30 or less at www.gisadvisor.com.  These courses can’t get into the level of depth that a live course offers, but after thousands of students have taken them, close to 90% give the courses 4 stars out of 5!


Radian can read ESRI geodatabases

I just got the new build of Radian Studio, and now that it can directly read geodatabases, you can link directly to a geodatabase from inside Radian and perform Radian spatial queries on it.  In this video, I link directly to an ESRI geodatabase, create a small map display of the data, perform a spatial clip of two vector layers, and return the results.   In my previous video, I showed how Radian Studio can read data directly from PostgreSQL and SQLite, and also easily exchange data between them.   This is just another example of how Radian can manage disparate GIS databases.


I hope you like the video; if so, consider learning more about Radian Studio in my online course here.

Workshops at the Maryland Geospatial Conference

The Maryland Geospatial Conference (TUgis) is on March 20/21, 2017.  I first attended TUgis in 1990, and it is always a great conference.  It is not too large, so it is a great way to have extended time with people.  So, if you had a technical question for someone from, say, ESRI, you could simply stop by their booth and have a chat.

This year I was asked to support the pre-conference workshops.  I will be presenting two workshops with the help of my students.  If you recall, my students are quite good at instructing others about GIS technology.  I’m really looking forward to the conference and interacting with people during the workshop.  Keep in mind, this is not something we are just throwing together – we’ve been spending a lot of time thinking about how to effectively move people through the material so that beginners do not get lost, and more technically savvy people are sufficiently challenged.  We are fanatical about making sure people’s learning experience is excellent.

A description of each course is found below:

Spatial SQL: A Language for Geographers:  Are you stuck in a rut of only knowing how to use a GIS GUI? Do you want to learn how to automate tasks, but are afraid of computer programming? If so, SQL is the most powerful tool you can learn to help you perform complex GIS tasks. This hands-on course is designed to teach you how SQL can replicate many database and GIS tasks. We will start with a very basic overview and then proceed to more advanced topics related to GIS.

Topics to include:

  • Spatial is NOT Special
  • SQL Data Types
  • Traditional SQL
  • Spatial SQL for Vector and Raster Analysis
  • Spatial SQL for Classic Geographic Analysis

For this class, we’ll be using SpatiaLite, which is the spatial extension for SQLite.  This is a great way to get started, as it is very similar in functionality to Postgres/PostGIS.  If you want to move to enterprise GIS with Postgres, or even Oracle or SQL Server, you’ll be in really good shape.

Python for Geospatial: If you are in the field of GIS, you’ve probably heard everyone talking about Python, whether it’s Arcpy in ArcGIS or special Python packages for doing things in open source.  In this hands-on workshop you will learn how Python is used to perform GIS analysis. The workshop will be an introduction to Python, with emphasis on integrating multiple Python plug-ins with ArcGIS and open source GIS.

Topics to include:

  • An overview of Python (variables, statements, I/O, writing code)
  • Python plug-ins for Geospatial (numpy, geocoder, pygal, Postgres)
  • A Taste of Arcpy
  • A Data Analytics Project with Python (for this, we will geocode addresses using Python, perform analysis with open source GIS, take the results into Arcpy to do more GIS analysis, compute statistical results with Python calling Excel, and then create charts and graphs of the results for use on the Internet—without ever opening up a single GIS product.)

If you want to learn more about how to use GIS technology, check out the 9 courses at gisadvisor.com.