I wanted to revisit the taxi data example that I previously blogged about. I had a 6GB file of 16 million taxi pickup locations and 263 taxi zones. I wanted to determine the number of pickups in each zone, along with the sum of all the fares. Below is a more in-depth review of what was done, but for those of you not wanting to read ahead, here are the result highlights:
| Software | Method | Time |
| --- | --- | --- |
| ArcGIS 10.4 | AddJoin_management | Out of memory |
| ArcGIS Pro | Summarize Within | 1h 27m* |
| ArcGIS Server GeoAnalytics with Big Data File Share | Summarize Within | ~2m |
| Manifold 9 | GeomOverlayContainedPar | 3m 27s |
| Postgres/PostGIS (optimized) | ST_Contains | 1m 40s |
*I’m happy ArcGIS Pro ran at this speed, but I think it can do better. This is a geodatabase straight out of the box. I think we can fiddle with indexes, and even restructure the data, to get things to run faster. That is something I’ll work on next week.
I was sufficiently impressed with how some of the newer approaches were able to improve the performance. Let’s dive in:
The Data and Computer
The data was obtained from the NYC Taxi and Limousine Commission for October 2012. The approximately 16 million taxi pickup locations and 263 taxi zone polygons required around 6GB of storage. I have the data in a geodatabase here. You can see below that this is a lot of data:
I used my Cyberpower gaming PC, which has Windows 10, an i7 processor (4 cores at 3.0 GHz), a solid-state drive, and 12GB of RAM. So, pretty much what every teenager has in their bedroom.
The question I wanted to answer was: how many taxi pickups were there for each zone, and what was the total amount of the fare? Fair question (no pun intended!). So, I decided to try to answer it with ArcGIS, Manifold, and Postgres.
As most of you know, ArcGIS 10.4 is a 32-bit application. So, I wondered how well it could tackle this problem. I attempted to perform a spatial table join (AddJoin_Management) between the taxi pickup locations and the taxi zones. In order to give ArcGIS a fighting chance, I moved the data into a geodatabase (that way, the layers would have spatial indexes). After running the join for a few hours, ArcGIS 10.4 reported an Out of Memory error.
Next, I moved on to ArcGIS Pro, which is a true 64-bit application. Also, ArcGIS Pro has a number of tools to do exactly what I want. One was Summarize Within. ESRI makes it really easy to ask these sorts of questions in ArcGIS Pro. So, I ran the function, and got a resulting table in 1h 27m. At this point in my experiment, I was fairly pleased – at least I got an answer, and it is something I could do over a lunch break.
ArcGIS Server with GeoAnalytics Server
I knew that ESRI was touting their new GeoAnalytics Server, so I wanted to give that a try. Unfortunately, I do not own GeoAnalytics Server. Fortunately, a friend owns it, and was able to test it out on his computer. To my amazement, he ran the query in about 2m. I was astounded. Hats off to ESRI: this product is designed for big data for sure. I would say if you have an ArcGIS Server license, this is something worth checking out for big data processing. Nothing cryptic like Hadoop, either: the same ArcGIS Pro interface is there to run the analysis on GeoAnalytics Server.
As most of you know, I am a big fan of Manifold GIS, and have often discussed my work with the product. Manifold 9 is designed for big data analytics. It has a query engine that makes use of parallel processing. The function I used was GeomOverlayContainedPar. It can be run from the GUI, but I bypassed that and just wrote a straight-up SQL query, which is a bit more flexible:
```sql
SELECT s_mfd_id AS [mfd_id], sgeom AS [Geom], sumfare, avgfare, s_zone, numrides
INTO sumtable
FROM (
  SELECT s_mfd_id,
         count(o_mfd_id) AS numrides,
         avg([o_fare_amount]) AS avgfare,
         sum([o_fare_amount]) AS sumfare,
         first(s_geom) AS sgeom,
         first(s_zone) AS s_zone
  FROM (
    SELECT s_zone, o_fare_amount, s_mfd_id, s_geom, o_mfd_id
    FROM CALL GeomOverlayContainedPar(
      [taxi_zones] ([mfd_id], [zone], [Geom]),
      [pickup Drawing] ([pu_geom], [mfd_id], [fare_amount]),
      0,
      ThreadConfig(SystemCpuCount()))
  )
  GROUP BY s_mfd_id
)
```
I won’t go into detail on the query, but in this case, I was using all 4 cores (actually 8, when you consider the hyperthreading) to process the data. The query ran and returned the table in 3m 27s. Again, I was sufficiently impressed, given that Manifold 9 sells for around $425.
I like to needle my friends at Manifold, so I sent them the data and the results, so stay tuned, I’d be willing to bet that we see them get under 3 minutes fairly soon.
It’s no secret that I’m also a fan of FOSS4g software like Postgres, and I teach a number of courses in the use of Postgres. So, I wanted to see how this would run in Postgres with PostGIS. The first thing I did was create a straight-up SQL statement:
```sql
SELECT count(*) AS totrides,
       taxizones.zone,
       sum(taxisandy.fare_amount)
FROM taxizones, taxisandy
WHERE ST_Contains(taxizones."Geom", taxisandy.pu_geom)
GROUP BY zone;
```
Good grief, it doesn’t get much simpler than that. This query ran in 10m 27s. I was pleased with this result. I mean, after all, it’s free! And, that query is super simple to write. But I wasn’t done yet. I knew there were some ways to optimize things.
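If you want to confirm the spatial index is actually being used before optimizing further, you can prefix the query with EXPLAIN ANALYZE. This is just a diagnostic sketch using the same table and column names as above:

```sql
-- Show the actual plan and timings; with a GiST index on the geometry
-- columns you should see an index/bitmap scan feeding the join rather
-- than a sequential scan of every point-polygon pair.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) AS totrides,
       taxizones.zone,
       sum(taxisandy.fare_amount)
FROM taxizones, taxisandy
WHERE ST_Contains(taxizones."Geom", taxisandy.pu_geom)
GROUP BY zone;
```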
I had already created a spatial index, so that was good. But there were two more things I was hoping to do: vacuum the tables, and cluster the data. So, what do these commands do?
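For reference, the spatial indexes themselves are plain GiST indexes on the geometry columns. This is a sketch; the index names here are the ones the CLUSTER commands refer to, but your own names may differ:

```sql
-- GiST spatial indexes on both geometry columns; ST_Contains uses these
-- to narrow candidate pairs (bounding-box test) before the exact
-- containment test runs.
CREATE INDEX pugeom_idx ON taxisandy USING GIST (pu_geom);
CREATE INDEX "Geom_x" ON taxizones USING GIST ("Geom");
```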
VACUUM reclaims storage occupied by dead tuples. In normal PostgreSQL operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a VACUUM is done.
CLUSTER physically reorders the data on the disk so that data that should be near one another in the database are actually near one another on the disk. In other words, points in Brooklyn are now physically stored on the disk near other points in Brooklyn, and the same is true for all the other boroughs. I wasn’t sure if this would do anything, since I already had a solid-state drive. A friend of mine in the Computer Science Department told me that it would. I would tell you what he said, but quite frankly his verbose explanation left me confused and sleepy!
So, how did I do this? First, I vacuumed and clustered the data:
```sql
VACUUM ANALYZE taxizones ("Geom");
VACUUM ANALYZE taxisandy (pu_geom);
CLUSTER taxisandy USING pugeom_idx;
CLUSTER taxizones USING "Geom_x";
```
Now, running the cluster on the pickup locations did in fact take time: 18 minutes. That is a one-time expense we pay. After that, we can run whatever query we want, over and over again. The query is a little more involved than the previous one because I wanted to write the results to a new table, so I had to rejoin the summary with the zones:
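One small housekeeping note: CLUSTER rewrites the table but does not refresh the planner’s statistics, and the docs advise re-analyzing afterwards so the planner sees the new physical ordering (the correlation statistic). A minimal follow-up, using the same table names:

```sql
-- Refresh planner statistics after the physical reorder;
-- CLUSTER does not update them automatically.
ANALYZE taxisandy;
ANALYZE taxizones;
```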
```sql
SELECT taxizones."Geom", sumfare, a.zone
INTO sumtable
FROM taxizones,
     (SELECT taxizones.zone, sum(taxisandy.fare_amount) AS sumfare
      FROM taxizones
      JOIN taxisandy ON ST_Contains("Geom", pu_geom)
      GROUP BY zone) AS a
WHERE taxizones.zone = a.zone;
```
Drum roll, please. The query completed in 1m 40s. Wow! Of course, with PostGIS you have to factor in the cost: $0. I guess you get what you pay for?
So, what is the takeaway? Well, GIS products are evolving, and are now positioned to handle really large data sets in ways we hadn’t been able to do before. I’m impressed with each of these products.
Two final notes:
If you live in the Denver area, please come and visit with me as I teach two workshops on FOSS4g, big data geoanalytics, Python and SQL: one at Colorado State University in Fort Collins on October 25, and one in Denver at Metropolitan State University of Denver (October 26). I’d love to see you there!
And as always, all my online video courses are available at www.gisadvisor.com. For this, you might find Spatial SQL with PostGIS, and Big Data Analytics with GIS to be two very useful courses to pursue this kind of work further.