Big Data Results

I wanted to revisit the taxi data example that I previously blogged about.  I had a 6GB file of 16 million taxi pickup locations and 263 taxi zones, and I wanted to determine the number of pickups in each zone, along with the sum of all the fares.  Below is a more in-depth review of what was done, but for those of you not wanting to read ahead, here are the result highlights:

Platform                                                       | Command                              | Time
ArcGIS 10.4                                                    | AddJoin_Management                   | Out of memory
ArcGIS Pro                                                     | Summarize Within                     | 1h 27m*
ArcGIS Server Big Data GeoAnalytics with Big Data File Share   | Summarize Within / Aggregate Points  | ~2m
Manifold 9                                                     | GeomOverlayContainedPar              | 3m 27s
Postgres/PostGIS                                               | ST_Contains                          | 10m 27s
Postgres/PostGIS (optimized)                                   | ST_Contains                          | 1m 40s
*I’m happy ArcGIS Pro ran at this speed, but I think it can do better.  This is a geodatabase straight out of the box; I think we can fiddle with indexes and even restructure the data to get things to run faster.  That is something I’ll work on next week.

I was sufficiently impressed with how some of the newer approaches were able to improve the performance.  Let’s dive in:

The Data and Computer

The data was obtained from the NYC Taxi and Limousine Commission for October 2012.  The approximately 16 million taxi pickup locations and 263 taxi zone polygons required around 6GB of storage.  I have the data in a geodatabase here.  You can see below that this is a lot of data:

[Image: map of the 16 million taxi pickup points]

I used my Cyberpower gaming PC, which runs Windows 10 and has an i7 processor (4 cores at 3.0GHz), a solid-state drive, and 12GB of RAM.   So, pretty much what every teenager has in their bedroom.

The Question

The question I wanted to answer was: how many taxi pickups were there in each zone, and what was the total amount of the fares?  Fair question (no pun intended!).  So, I decided to try to answer it with ArcGIS, Manifold, and Postgres.

ArcGIS 10.4

As most of you know, ArcGIS 10.4 is a 32-bit application.  So, I wondered how well it could tackle this problem.  I attempted to perform a spatial table join (AddJoin_Management) between the taxi pickup locations and the taxi zones.  In order to give ArcGIS a fighting chance, I moved the data into a geodatabase (that way, the layers would have spatial indexes).  After running the join for a few hours, ArcGIS 10.4 reported an Out of Memory error.

ArcGIS Pro

Next, I moved on to ArcGIS Pro, which is a true 64-bit application.  ArcGIS Pro also has a number of tools to do exactly what I want; one of them is Summarize Within.   ESRI makes it really easy to ask these sorts of questions in ArcGIS Pro.  So, I ran the function and got a resulting table in 1h 27m.  At this point in my experiment I was fairly pleased – at least I got an answer, and it is something I could do over a lunch break.

ArcGIS Server with GeoAnalytics Server

I knew that ESRI was touting their new GeoAnalytics Server, so I wanted to give that a try.   Unfortunately, I do not own GeoAnalytics Server.  Fortunately, a friend owns it and was able to test it out on his computer.  To my amazement, he ran the query in about 2m.  I was astounded – hats off to ESRI.  This product is designed for big data for sure.  I would say that if you have an ArcGIS Server license, this is something worth checking out for big data processing.  Nothing cryptic like Hadoop – the same ArcGIS Pro interface is used to run the analysis on the GeoAnalytics Server.

Manifold 9

As most of you know, I am a big fan of Manifold GIS, and have often discussed my work with the product.  Manifold 9 is designed for big data analytics, and its query engine makes use of parallel processing.  The function I used was GeomOverlayContainedPar.  There is a GUI for it, but I bypassed that and just wrote a straight-up SQL query, which is a bit more flexible:

SELECT s_mfd_id AS [mfd_id], sgeom AS [Geom], sumfare, avgfare, s_zone, numrides
INTO sumtable
FROM (
  -- Aggregate the pickups matched to each zone: ride count, average fare,
  -- and total fare, keeping the zone geometry and name.
  SELECT s_mfd_id, count(o_mfd_id) AS numrides, avg([o_fare_amount]) AS avgfare,
         sum([o_fare_amount]) AS sumfare, first(s_geom) AS sgeom, first(s_zone) AS s_zone
  FROM (
    -- Parallel point-in-polygon overlay: pair each pickup with the zone that
    -- contains it, spreading the work across all CPU threads.
    SELECT s_zone, o_fare_amount, s_mfd_id, s_geom, o_mfd_id
    FROM CALL GeomOverlayContainedPar([taxi_zones] ([mfd_id], [zone], [Geom]),
      [pickup Drawing] ([pu_geom], [mfd_id], [fare_amount]), 0,
      ThreadConfig(SystemCpuCount()))
  )
  GROUP BY s_mfd_id
)

I won’t go into detail on the query, but in this case I was using all 4 cores (actually 8 threads, counting hyperthreading) to process the data.  The query ran and returned the table in 3m 27s.  Again, I was sufficiently impressed, given that Manifold 9 sells for around $425.

I like to needle my friends at Manifold, so I sent them the data and the results.  Stay tuned – I’d be willing to bet that we see them get under 3 minutes fairly soon.

Postgres/PostGIS

It’s no secret that I’m also a fan of FOSS4G software like Postgres, and I teach a number of courses on using Postgres.  So, I wanted to see how this would run in Postgres with PostGIS.  The first thing I did was write a straight-up SQL statement:

SELECT count(*) AS totrides,taxizones.zone, sum(taxisandy.fare_amount)
FROM taxizones, taxisandy
WHERE ST_Contains(taxizones."Geom",taxisandy.pu_geom)
GROUP BY zone

Good grief, it doesn’t get much simpler than that.   This query ran in 10m 27s.  I was pleased with this result.  I mean, after all, it’s free!  And that query is super simple to write.  But I wasn’t done yet.  I knew there were some ways to optimize things.
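One quick check worth doing first (not something I showed above, just standard Postgres) is to run the same query under EXPLAIN ANALYZE and confirm the spatial index is actually being used:

-- Show the plan and actual timings Postgres used for the spatial join.
-- If the GiST index on pu_geom is not being used, the plan will show a
-- sequential scan of taxisandy rather than an index scan.
EXPLAIN ANALYZE
SELECT count(*) AS totrides, taxizones.zone, sum(taxisandy.fare_amount)
FROM taxizones, taxisandy
WHERE ST_Contains(taxizones."Geom", taxisandy.pu_geom)
GROUP BY zone;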

Postgres/PostGIS optimized

I had already created a spatial index, so that was good.  But there were two more things I was hoping to do: vacuum the table and cluster the data.  So, what do these commands do?

VACUUM reclaims storage occupied by dead tuples. In normal PostgreSQL operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a VACUUM is done.

CLUSTER physically reorders the data on the disk so that data that should be near one another in the database are actually near one another on the disk.  In other words, points in Brooklyn are now physically stored on the disk near other points in Brooklyn, and the same is true for all the other boroughs.  I wasn’t sure if this would do anything, since I already had a solid-state drive.  A friend of mine in the Computer Science Department told me that it would.  I would tell you what he said, but quite frankly his verbose explanation left me confused and sleepy!

So, how did I do this?  First, I vacuumed and clustered the data:

VACUUM ANALYZE taxizones ("Geom"); 
VACUUM ANALYZE taxisandy (pu_geom);
CLUSTER taxisandy USING pugeom_idx; 
CLUSTER taxizones USING "Geom_x";
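
For reference, the two indexes named in the CLUSTER commands are ordinary PostGIS GiST indexes.  If you were rebuilding this from scratch, creating them would look something like this (the index names just need to match the ones CLUSTER refers to):

-- Spatial (GiST) indexes on the pickup points and the zone polygons.
-- CLUSTER then physically reorders each table using its listed index.
CREATE INDEX pugeom_idx ON taxisandy USING GIST (pu_geom);
CREATE INDEX "Geom_x" ON taxizones USING GIST ("Geom");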

Now, running the cluster on the pickup locations did in fact take time – 18 minutes.  But that is a one-time expense; after that, we can run whatever query we want, over and over again.  The query is a little more involved than the previous one because I wanted to write the results to a new table, so I had to rejoin the summary with the zones:

SELECT taxizones."Geom", sumfare, a.zone
INTO sumtable
FROM taxizones, 
(SELECT taxizones.zone, sum(taxisandy.fare_amount) AS sumfare
FROM taxizones
JOIN taxisandy
ON ST_Contains("Geom", pu_geom)
GROUP BY zone) AS a
WHERE taxizones.zone = a.zone

Drum roll, please. The query completed in 1m 40s.  Wow!  Of course, with PostGIS you have to factor in the cost: $0.  I guess you get what you pay for????

So, what is the takeaway?  Well, GIS products are evolving, and are now positioned to handle really large data sets in ways we hadn’t been able to do before.  I’m impressed with each of these products.

Two final notes:

If you live in the Denver area, please come and visit with me as I teach two workshops on FOSS4G, big data geoanalytics, Python, and SQL: one at Colorado State University in Fort Collins on October 25, and one at Metropolitan State University of Denver on October 26.  I’d love to see you there!

And as always, all my online video courses are available at www.gisadvisor.com.   For this, you might find Spatial SQL with PostGIS and Big Data Analytics with GIS to be two very useful courses for pursuing this kind of work further.

Work smarter – not larger

When you were in Statistics 101 and the professor said, "OK, we are now going to learn about the Central Limit Theorem," did you tune out? Did you sarcastically ask when someone was ever going to grab you and demand that you tell them about the Central Limit Theorem? Come on, admit it, you did.  Well, so did I – I was 18 years old and couldn’t care less.

Well, you know what? Understanding the Central Limit Theorem has really big implications for big data analytics. Check out this 20-minute video, and you’ll see that by applying the Central Limit Theorem and some statistical theory, you can approximate the results of an expensive multi-server implementation for interrogating really large databases.

I’ll show you how you can obtain very precise estimates on really large databases by simply applying some basic statistics you should have learned Freshman year (but you were too busy partying, weren’t you?)
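
To make the idea concrete (this is a sketch of the general approach, not the exact queries from the video), Postgres can sample a table directly with TABLESAMPLE, and the Central Limit Theorem is what justifies scaling the sample statistic up to an estimate of the true total:

-- Estimate the citywide total fare from a 1% random sample of the pickup
-- table used in the earlier posts. BERNOULLI samples individual rows, so
-- multiplying the sample sum by 100 gives an unbiased estimate of the total,
-- and the CLT tells us the estimate tightens as the sample grows.
SELECT sum(fare_amount) * 100 AS estimated_total_fare
FROM taxisandy TABLESAMPLE BERNOULLI (1);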

Stay tuned – I’ll be coming out with a big data analytics class in the New Year.  If you want to learn more about SQL, programming, open source GIS, or Manifold, check out my courses at www.gisadvisor.com.

Big Data GeoAnalytics – Adding Data

Continuing my series on big data geoanalytics, I wanted to show how to bring in large data sets so that we can start working with them. The data set we’ll use is the NYC taxi data, which includes information on pickups and dropoffs. There are about 13 million records in a 2.2GB .csv file. That is not insanely large, but it is large enough for us to start messing around with (don’t worry, I have a few 20GB+ data sets that I am working with and will eventually show to you as well).

The video below walks you through the steps I took to load and prepare the NYC taxi data inside of Manifold Future. My next posts will begin to look at how we can interrogate the data source to find meaningful information.
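
The video covers the Manifold Future workflow.  If you instead wanted the same CSV in Postgres for the kinds of SQL shown elsewhere on this blog, a plain COPY is usually the fastest route; the table definition below is illustrative, not the actual schema of the taxi file:

-- Hypothetical staging table for the raw taxi CSV; adjust the columns to
-- match the file's actual header before loading.
CREATE TABLE taxi_trips (
    pickup_datetime  timestamp,
    pickup_longitude double precision,
    pickup_latitude  double precision,
    fare_amount      numeric
);

-- Bulk-load the CSV; COPY is far faster than row-by-row INSERTs.
COPY taxi_trips FROM '/path/to/nyc_taxi.csv' WITH (FORMAT csv, HEADER true);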

I hope you enjoy the video. Please comment below – I’d love to hear what people think.

Big Data GeoAnalytics – Turning Points to Lines

In my last video, I gave a sort of mile-high view of how SQL can be used for big data geoanalytics.  I want to dive a little deeper and explore the idea of creating linear features from a time-series of points.

Once again, using some basic SQL and spatial SQL, we can perform a simple time-series analysis.
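
The video uses Manifold, but the underlying spatial SQL pattern is an ordered ST_MakeLine per vehicle.  Here is a minimal PostGIS sketch; the taxi_id and pickup_datetime columns are assumptions for illustration:

-- Build one line per taxi by connecting its pickup points in time order.
-- ST_MakeLine is an aggregate, and it accepts an ORDER BY inside the call.
SELECT taxi_id,
       ST_MakeLine(pu_geom ORDER BY pickup_datetime) AS route_geom
FROM taxisandy
GROUP BY taxi_id;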

I’m enjoying making these videos, as they are helping me put together my course on big data and GIS.  I hope you like them too.  Please comment below so that I know this is something the user community enjoys and is learning from.

Also, if you are interested in learning more about how to perform spatial SQL in Microsoft SQL Server, Postgres, or Manifold, visit my other site, www.gisadvisor.com to sign up for my online video courses.

Big data geo-analytics with SQL

I’m getting ready to create a course in big data analytics with GIS.  I have lots of ideas as to what to do, but one thing I know is that I will be using spatial databases and SQL.  I’ll also be using Manifold Future.

ESRI recently released its ArcGIS GeoAnalytics Server, which will introduce many GIS professionals to big data analytics with GIS.  They have some interesting scenarios and example data using NYC taxi cabs.  I think these will be really good case studies.

This video (just shy of 20 minutes) uses SQL and Manifold to try to address these big data problems.

Keep an eye on my blog as I will be rolling out new ideas as I prepare my course for the Spring.

If you like the video and want to learn more about how to improve your spatial database skills, check out my videos at www.gisadvisor.com.

Parallel Processing with QGIS

Once again, I am continuing my role as a mentor in a National Science Foundation (NSF) Research Experiences for Undergraduates program.  This year we’ve decided to build a QGIS plug-in for terrain analysis, as it is embarrassingly parallel (slope, aspect, etc.).   We are doing four things as we generate slope for different-sized digital elevation models:

  1. A pure python implementation (for an easy plug-in)
  2. A serial-based C++ implementation (as a baseline for a non-parallel solution)
  3. A pyCUDA implementation (using the GPU for parallel processing)
  4. A C++ based parallel solution using the GPU

We plan to put our results on a shared GitHub site (we are in the process of cleaning up the code) so that people can start experimenting with it, and use our example to begin generating more parallel solutions for QGIS (or GDAL for that matter).

Here are some early results:

When More is Less…. lessons from processing large data files

My good friend Stuart Hamilton gave me a fun conundrum to try out. He has a file of province boundaries (400 areas) and lidar-derived mangrove locations (37 million points – 2.2GB in size). He wants to find the number of mangroves contained in each area.  He also wants to know which country each mangrove location is in.  An overview of the area is here:

[Image: overview map of the provinces and mangrove points]

But as you zoom in, you can see that there is a tremendous number of points:

[Image: zoomed-in view showing the density of the mangrove points]

The problem

You would think that overlaying 37 million points with 400 polygons wouldn’t be too much trouble – but it was.  Big-time trouble.  In fact, after running for days in ArcGIS, Manifold GIS, Postgres/PostGIS, and SpatialHadoop, it simply would not complete.
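
For context, the query we were all effectively asking each package to run looks roughly like this in PostGIS terms (the table and column names here are assumptions):

-- Count the lidar-derived mangrove points that fall inside each province.
-- With 37 million points against 400 polygons, this is the join that
-- refused to finish in any of the packages above.
SELECT p.province_name, count(*) AS mangrove_count
FROM provinces AS p
JOIN mangrove_points AS m
  ON ST_Contains(p.geom, m.geom)
GROUP BY p.province_name;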