Continuing my series on big data geoanalytics, I wanted to show how to bring in large data sets so that we can start working with them. The data set we’ll use is the NYC taxi data, which includes information on pickups and dropoffs. There are about 13 million records in a 2.2GB .csv file. That is not insanely large, but it is large enough for us to start messing around with (don’t worry, I have a few 20GB+ data sets that I am working with and will eventually show those to you as well).
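To give a feel for what loading a file that size involves, here is a minimal Python sketch of reading a big .csv in chunks so it never has to fit in memory at once. This is just an illustration, not what Manifold does internally, and the column names are placeholders rather than the actual taxi schema:

```python
import csv
import io

def iter_chunks(fileobj, chunk_size=100_000):
    """Yield lists of parsed rows, chunk_size rows at a time, so a
    multi-gigabyte file is processed incrementally."""
    reader = csv.DictReader(fileobj)
    chunk = []
    for row in reader:
        chunk.append(row)
        if len(chunk) >= chunk_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk

# Tiny in-memory stand-in for the 2.2GB taxi file (illustrative columns).
sample = io.StringIO(
    "pickup_datetime,pickup_longitude,pickup_latitude\n"
    "2013-01-01 00:00:00,-73.98,40.76\n"
    "2013-01-01 00:05:00,-73.97,40.75\n"
)
total = sum(len(chunk) for chunk in iter_chunks(sample, chunk_size=1))
```

With 13 million rows, chunked iteration like this keeps memory flat no matter how large the file gets.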
The video below walks you through the steps I took to load and prepare the NYC taxi data inside of Manifold Future. My next posts will look at how we can interrogate the data to find meaningful information.
I hope you enjoy the video. Please comment below – I’d love to hear what people think.
In my last video, I gave a sort of mile-high view of how SQL can be used for big data geoanalytics. I want to dive a little deeper and explore the idea of creating linear features from a time-series of points.
Once again, using some basic SQL and spatial SQL, we can perform simple time-series analysis.
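The core idea is simple: order each trip’s GPS points by timestamp and string them together into a line. Here is a small sketch using Python’s built-in sqlite3 as a stand-in database (plain SQLite has no spatial functions, so the WKT LINESTRING is assembled in Python; in PostGIS or Manifold a spatial SQL function would do that step directly). The table and column names are made up for the example:

```python
import sqlite3

# In-memory database standing in for a real spatial database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pings (trip_id TEXT, ts TEXT, lon REAL, lat REAL)")
con.executemany(
    "INSERT INTO pings VALUES (?, ?, ?, ?)",
    [
        ("a", "2017-01-01 00:00", -73.98, 40.76),
        ("a", "2017-01-01 00:05", -73.97, 40.75),
        ("b", "2017-01-01 00:01", -73.99, 40.73),
        ("b", "2017-01-01 00:04", -73.98, 40.74),
    ],
)

def trip_linestrings(con):
    """Order each trip's points by time and emit one WKT LINESTRING per trip."""
    rows = con.execute(
        "SELECT trip_id, lon, lat FROM pings ORDER BY trip_id, ts"
    )
    trips = {}
    for trip_id, lon, lat in rows:
        trips.setdefault(trip_id, []).append(f"{lon} {lat}")
    return {t: "LINESTRING(" + ", ".join(pts) + ")" for t, pts in trips.items()}

lines = trip_linestrings(con)
# lines["a"] → 'LINESTRING(-73.98 40.76, -73.97 40.75)'
```

The ORDER BY on trip and timestamp is doing the real work here; everything after it is just formatting the ordered points as geometry.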
I’m enjoying making these videos, as they are helping me put my course on big data and GIS together. I hope you like them too. Please comment down below so that I know this is something the user community enjoys and is learning from.
Also, if you are interested in learning more about how to perform spatial SQL in Microsoft SQL Server, Postgres, or Manifold, visit my other site, www.gisadvisor.com to sign up for my online video courses.
I’m getting ready to create a course in big data analytics with GIS. I have lots of ideas as to what to do, but one thing I know is that I will be using spatial databases and SQL. I’ll also be using Manifold Future.
ESRI has recently introduced their ArcGIS GeoAnalytics Server, which will introduce many GIS professionals to big data analytics with GIS. They have some interesting scenarios and example data using NYC taxi cabs. I think these will be really good case studies.
This video (just shy of 20 minutes) will use SQL and Manifold to try and address these big data problems.
Keep an eye on my blog as I will be rolling out new ideas as I prepare my course for the Spring.
If you like the video and want to learn more about how to improve your spatial database skills, check out my videos at www.gisadvisor.com.
Once again, I am continuing my role as a mentor in a National Science Foundation (NSF) Research Experiences for Undergraduates (REU) program. This year we’ve decided to build a QGIS plug-in for terrain analysis, as it is embarrassingly parallel (slope, aspect, etc.). We are doing four things as we generate slope for different-sized digital elevation models:
- A pure python implementation (for an easy plug-in)
- A serial-based C++ implementation (as a baseline for a non-parallel solution)
- A pyCUDA implementation (using the GPU for parallel processing)
- A C++ based parallel solution using the GPU
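As a rough sketch of what the pure Python implementation computes (this is a simple central-difference slope on interior cells, not our actual plug-in code, which may use a different stencil such as Horn’s method):

```python
import math

def slope_degrees(dem, cellsize=1.0):
    """Central-difference slope (degrees) for interior cells of a DEM grid.

    Each cell's value depends only on its immediate neighbors, which is
    what makes slope embarrassingly parallel and a good fit for the GPU.
    Border cells are left as None in this simple sketch.
    """
    rows, cols = len(dem), len(dem[0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            dzdx = (dem[r][c + 1] - dem[r][c - 1]) / (2 * cellsize)
            dzdy = (dem[r + 1][c] - dem[r - 1][c]) / (2 * cellsize)
            out[r][c] = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    return out

# A tilted plane rising one elevation unit per cell in x → uniform 45° slope.
dem = [[float(c) for c in range(4)] for _ in range(4)]
s = slope_degrees(dem)
# s[1][1] → 45.0
```

Because the double loop has no dependencies between iterations, the CUDA versions can assign one thread per cell, which is where the speedups in our comparisons come from.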
We plan to put our results on a shared GitHub site (we are in the process of cleaning up the code) so that people can start experimenting with it, and use our example to begin generating more parallel solutions for QGIS (or GDAL for that matter).
Here are some early results:
My good friend Stuart Hamilton gave me a fun conundrum to try out. He has a file of province boundaries (400 areas) and lidar-derived mangrove locations (37 million points – 2.2GB in size). He wants to find the number of mangroves that are contained in each area. He also wants to know which country each mangrove location is in. An overview of the area is here:
but, as you zoom in, you can see that there are a tremendous number of points:
You would think that overlaying 37 million points with 400 polygons wouldn’t be too much trouble – but it was. Big-time trouble. In fact, after running for days in ArcGIS, Manifold GIS, PostgreSQL/PostGIS, and Spatial Hadoop, it simply would not complete.
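To see why the job is so punishing, here is a pure-Python sketch of the core point-in-polygon test (ray casting) and the naive overlay loop, with toy coordinates. This brute-force version is O(points × polygons) – at 37 million points against 400 areas that is roughly 15 billion tests, which is exactly why a real solution needs a spatial index or tiling rather than code like this:

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: count edge crossings of a ray going right from (x, y).
    An odd number of crossings means the point is inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y-level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def count_points(polygons, points):
    """Brute-force count of points per polygon: every point is tested
    against every polygon until a hit is found."""
    counts = {name: 0 for name in polygons}
    for x, y in points:
        for name, poly in polygons.items():
            if point_in_polygon(x, y, poly):
                counts[name] += 1
                break  # each point falls in at most one province
    return counts

# Two toy square "provinces" and three mangrove points.
polygons = {"A": [(0, 0), (2, 0), (2, 2), (0, 2)],
            "B": [(2, 0), (4, 0), (4, 2), (2, 2)]}
points = [(1, 1), (3, 1), (1.5, 0.5)]
# count_points(polygons, points) → {"A": 2, "B": 1}
```

Note that points exactly on a shared boundary are ambiguous in this sketch; the GIS packages above handle that (and use indexing), yet the sheer volume still defeated them here.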