Denver workshops – a mixture of sadness and joy.

We just completed another two successful Keeping Up with GIS Technology workshops out in Denver.  This was a week mixed with sadness and joy: sadness in that my Mom passed away on Tuesday, and I had to fly out of town to Denver on Wednesday (the show must go on).  But joy, as I was able to catch up with many former students, classmates, and friends in the Denver area.  Also relief, as I got to spend the morning with my Mom before she passed away quickly, quietly, and most importantly, painlessly – reunited with my Dad.

OK, back to the workshop.  The hospitality shown by Beth Hill Tulanowski @hethbill and David Parr @gisdaveparr was extraordinary.  I truly enjoyed my time with them in the midst of some sadness.  The responses to both workshops were excellent, and similar to my other workshops.  The following provides a brief overview:

[survey chart]

I always want to make sure that I am prepared, responsive, and interesting when I teach.  The chart surely shows that the students were in agreement.  But, enough about me!  The real key is whether this workshop helps other professionals in their careers:

[survey chart]

The participants were very positive about learning something new in their career, using these skills at work, and having the workshop help them in their professional development.  These results are actually the best I’ve seen over the half dozen or so workshops that I have given.  Why go to a workshop if you don’t learn something new?

I have to admit, this next one is sort of an insecurity on my part.  Like Sally Field said in her acceptance speech: you like me, you really like me.  Well, it’s no different with me.  I want to make sure that what I am doing is of high quality and appreciated, especially when compared to other training people can get.  People have choices, so I want to make sure that I am giving them a quality workshop.

[survey chart]

I was very pleased to see that over 80% of the participants thought that this workshop rated an 8 or above compared to other workshops they’ve taken.  This is especially true because training courses tend to be expensive.  I want to be sensitive to people’s financial situation, and would hate to offer a training course that is no better than anything else out there.

But again, consistent with my previous reviews, people really do believe that training is worth paying for if it teaches them something new:

[survey chart]

Once again, 80% of the participants indicated that they would be very willing to pay a small fee to receive this kind of training (a rating of 8 or above out of 10).   I think this is also going to allow Colorado State University and Metropolitan State University of Denver to begin offering more training to the GIS community in their area.

There were some really positive written responses as well:

Art’s communication was excellent. It’s crazy complicated stuff (potentially) but he makes it seem more approachable and less fearsome
I particularly enjoyed learning about QGIS and SQL; I’d be interested in expanded workshops on these topics (especially SQL)
Being exposed to QGIS (and learning how user-friendly it is) was fantastic. As was learning more about SQL querying in Postgres. I found the Python section a bit more obtuse, but I appreciate exposure to the language as that was one of the reasons I wanted to take this workshop.

And, in all fairness, I do ask for negative feedback as well, so that I can continually improve what I am doing:

It is tough to have this much information presented in this time-frame. I liked it but may be better to break it down. Probably need a full day for each of the 4 topics.

It was a lot to pack into one day, so the instructor had to go really fast through everything. I would have liked to take out one of the sections and get in more hands-on work.

What I Learned

The more I do these workshops, the more a thought comes to my mind: once you graduate college, very few GIS professionals have a mentor in their lives.  My students hang out in lab with me, go to sporting events, and even come over to my house.  They get a lot of my good energy, and we have great conversations.  What a shame that professionals don’t get that opportunity.  I really want to introduce more of these workshops in the country and also internationally, so that GIS professionals can not only learn new skills, but also hang out with me at lunch, between breaks at the workshop, and even afterwards for drinks.  That way, they have an opportunity to bounce ideas off of me and get feedback.  So, with the winter break and the summer coming up, I am going to try and find a few cities to continue to offer my workshops.  Please let me know if you’d like me to visit your city.

Want to learn more about GIS technologies like QGIS, Postgres/PostGIS, Python, and spatial databases?  Check out my online courses at www.gisadvisor.com.

Follow up to my big data test – improving PostGIS performance

Just a quick follow-up to my big data test.  If you remember, I was able to determine the number of taxi pickups and the sum of the fares for each zone using Postgres and PostGIS in 1m 40s.  Some of the taxi zones are a little large, so the containment query might actually take a little longer when comparing the bounding boxes in the spatial index.  To get around that, I used ST_SubDivide to break the larger taxi zones into smaller polygons:

[figure: taxi zones subdivided with ST_SubDivide]

This meant that my taxi zone polygons went from 263 to 4,666.  Now, on the face of it, what idiot would do an overlay with 4,666 polygons when 263 is smaller?  This idiot!  To understand why, you should read my blog post on When More is Less; you’ll see there is good logic behind the madness.  Well, anyway, that’s what I did, and we went from 1m 40s down to 1m 3s.

For those of you interested, I broke the zones up as follows:

SELECT ST_Subdivide("Geom", 50) AS geom, zone
INTO taxisub
FROM taxizones;

CLUSTER taxisub USING geom_idx;

and yes, CLUSTER once again made a big difference.
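One step the post doesn’t show: CLUSTER can only reorder a table along an existing index, so taxisub needs its own spatial index first.  A sketch of the assumed setup (the index name geom_idx matches the CLUSTER statement above):

```sql
-- Assumed setup: build a GiST index on the subdivided geometry,
-- so the CLUSTER statement above has something to reorder along.
CREATE INDEX geom_idx ON taxisub USING GIST (geom);
```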

I guess I should explain the SQL this time around, as it enables us to do some clever things.  Remember, taxisub has 4,666 polygons because it has subdivided the 263 polygons in taxizones.

SELECT taxizones."Geom" AS geom, numrides, sumfare, a.zone
INTO sumtable
FROM taxizones, 
   (SELECT taxisub.zone, count(*) AS numrides, sum(taxisandy.fare_amount) AS sumfare
    FROM taxisub
    JOIN taxisandy
    ON ST_Contains(geom, pu_geom)
    GROUP BY zone) AS a
WHERE taxizones.zone = a.zone;

In the above query, the inner portion is the straight-up SQL to determine the total rides and sum of the fares across the taxisub polygons, aggregated back up by zone.  However, taxisub has over 4,000 polygons – we don’t want to write all of that out.  So, the outer portion of the query joins back to the original taxizones (the one with only 263 polygons) and writes the result to a final table.

Again, if you want to learn how to do more spatial SQL like this, check out my courses at http://www.gisadvisor.com.  

Big Data Results

I wanted to revisit the taxi data example that I previously blogged about.  I had a 6GB file of 16 million taxi pickup locations and 260 taxi zones.  I wanted to determine the number of pickups in each zone, along with the sum of all the fares.  Below is a more in-depth review of what was done, but for those of you not wanting to read ahead, here are the result highlights:

Platform | Command | Time
ArcGIS 10.4 | AddJoin_Management | Out of memory
ArcGIS Pro | Summarize Within | 1h 27m*
ArcGIS Server (GeoAnalytics with Big Data File Share) | Summarize Within / Aggregate Points | ~2m
Manifold 9 | GeomOverlayContained | 3m 27s
Postgres/PostGIS | ST_Contains | 10m 27s
Postgres/PostGIS (optimized) | ST_Contains | 1m 40s

* I’m happy ArcGIS Pro ran at this speed, but I think it can do better.  This was a geodatabase straight out of the box; I think we can fiddle with indexes and even restructure the data to get things to run faster.  That is something I’ll work on next week.

I was sufficiently impressed with how some of the newer approaches were able to improve the performance.  Let’s dive in:

The Data and Computer

The data was obtained from the NYC Taxi and Limousine Commission for October 2012.  The approximately 16 million taxi pickup locations and 263 taxi zone polygons required around 6GB of storage.  I have the data in a geodatabase here.  You can see below that this is a lot of data:

[map: the 16 million taxi pickup locations]

I used my Cyberpower gaming PC, which runs Windows 10 and has a 3.0GHz i7 processor (4 cores), a solid-state drive, and 12GB of RAM.   So, pretty much what every teenager has in their bedroom.

The Question

The question I wanted to know was: how many taxi pickups were there for each zone, and what was the total amount of the fare?  Fair question (no pun intended!).  So, I decided to try to answer this question with ArcGIS, Manifold, and Postgres.

ArcGIS 10.4

As most of you know, ArcGIS 10.4 is a 32-bit application.  So, I wondered how well it could tackle this problem.  I attempted to perform a spatial table join (AddJoin_Management) between the taxi pickup locations and the taxi zones.  In order to give ArcGIS a fighting chance, I moved the data into a geodatabase (that way, the layers would have spatial indexes).  After running the join for a few hours, ArcGIS 10.4 reported an Out of Memory error.

ArcGIS Pro

Next, I moved on to ArcGIS Pro, which is a true 64-bit application.  Also, ArcGIS Pro has a number of tools to do exactly what I want.  One was Summarize Within.   ESRI makes it really easy to ask these sorts of questions in ArcGIS Pro.  So, I ran the function, and got a resulting table in 1h 27m.  At this point in my experiment, I was fairly pleased – at least I got an answer, and it is something I could do over a lunch break.

ArcGIS Server with GeoAnalytics Server

I knew that ESRI was touting their new GeoAnalytics Server, so I wanted to give that a try.   Unfortunately, I do not own GeoAnalytics Server.  Fortunately, a friend owns it, and was able to test it out on his computer.  To my amazement, he ran the query in about 2m.  I was astounded – hats off to ESRI.  This product is designed for big data for sure.  I would say if you have an ArcServer license, this is something worth checking out for big data processing.  Nothing cryptic like Hadoop – the same ArcGIS Pro interface is there to run the data under the GeoAnalytics server.

Manifold 9

As most of you know, I am a big fan of Manifold GIS, and have often discussed my work with the product.  Manifold 9 is designed for big data analytics.  They have a query engine that makes use of parallel processing.  The function I used was GeomOverlayContainedPar.  It actually works as a GUI, but I bypassed that and just wrote a straight-up SQL query which is a bit more flexible:

SELECT s_mfd_id AS [mfd_id], sgeom AS [Geom], sumfare, avgfare, s_zone, numrides
INTO sumtable
FROM (
    SELECT s_mfd_id, count(o_mfd_id) AS numrides,
           avg([o_fare_amount]) AS avgfare, sum([o_fare_amount]) AS sumfare,
           first(s_geom) AS sgeom, first(s_zone) AS s_zone
    FROM (
        SELECT s_zone, o_fare_amount, s_mfd_id, s_geom, o_mfd_id
        FROM CALL GeomOverlayContainedPar(
            [taxi_zones] ([mfd_id], [zone], [Geom]),
            [pickup Drawing] ([pu_geom], [mfd_id], [fare_amount]), 0,
            ThreadConfig(SystemCpuCount()))
    )
    GROUP BY s_mfd_id)
I won’t go into detail on the query, but in this case, I was using all 4 cores (actually 8, when you consider the hyperthreading) to process the data.  The query ran and returned the table in 3m 27s.  Again, I was sufficiently impressed, given that Manifold 9 sells for around $425.

I like to needle my friends at Manifold, so I sent them the data and the results.  Stay tuned – I’d be willing to bet that we see them get under 3 minutes fairly soon.

Postgres/PostGIS

It’s no secret that I’m also a fan of FOSS4g software like Postgres, and I teach a number of courses in the use of Postgres.  So, I wanted to see how this would run in Postgres with PostGIS.  The first thing I did was create a straight-up SQL statement:

SELECT count(*) AS totrides, taxizones.zone, sum(taxisandy.fare_amount) AS sumfare
FROM taxizones, taxisandy
WHERE ST_Contains(taxizones."Geom", taxisandy.pu_geom)
GROUP BY zone;

Good grief, it doesn’t get much simpler than that.   This query ran in 10m 27s.  I was pleased with this result.  I mean, after all, it’s free!  And, that query is super simple to write.  But I wasn’t done yet.  I knew there were some ways to optimize things.
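A query like this leans on GiST spatial indexes on both geometry columns.  The index names used by the CLUSTER commands later suggest they were created roughly like this (a sketch; the exact column and index names beyond what the post quotes are assumptions):

```sql
-- Assumed setup: GiST indexes on both geometry columns, matching the
-- index names (pugeom_idx, "Geom_x") used in the CLUSTER step below.
CREATE INDEX pugeom_idx ON taxisandy USING GIST (pu_geom);
CREATE INDEX "Geom_x" ON taxizones USING GIST ("Geom");
```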

Postgres/PostGIS optimized

I had already created a spatial index, so that was good.  But, there were two more things I was hoping to do: vacuum the table, and cluster the data.  So, what do these commands do?

VACUUM reclaims storage occupied by dead tuples.  In normal PostgreSQL operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a VACUUM is done.

CLUSTER physically reorders the data on the disk so that data that should be near one another in the database are actually near one another on the disk.  In other words, points in Brooklyn are now physically stored on the disk near other points in Brooklyn, and the same is true for all the other boroughs.  I wasn’t sure if this would do anything, since I already had a solid-state drive.  A friend of mine in the Computer Science Department told me that it would.  I would tell you what he said, but quite frankly his explanation was too technical for me!

So, how did I do this?  First, I vacuumed and clustered the data:

VACUUM ANALYZE taxizones ("Geom"); 
VACUUM ANALYZE taxisandy (pu_geom);
CLUSTER taxisandy USING pugeom_idx; 
CLUSTER taxizones USING "Geom_x";

Now, running the cluster on the pickup locations did in fact take time – 18 minutes.  That is a one-time expense we pay.  After that, we can run whatever query we want, over and over again.  The query is a little more involved than the previous one because I wanted to write the results to a new table, so I had to rejoin the result with the zones:

SELECT taxizones."Geom", sumfare, a.zone
INTO sumtable
FROM taxizones, 
    (SELECT taxizones.zone, sum(taxisandy.fare_amount) AS sumfare
     FROM taxizones
     JOIN taxisandy
     ON ST_Contains("Geom", pu_geom)
     GROUP BY zone) AS a
WHERE taxizones.zone = a.zone;

Drum roll, please. The query completed in 1m 40s.  Wow!  Of course, with PostGIS you have to factor in the cost: $0.  I guess you get what you pay for????

So, what is the takeaway?  Well, GIS products are evolving, and are now positioned to handle really large data sets in ways we hadn’t been able to do before.  I’m impressed with each of these products.

Two final notes:

If you live in the Denver area, please come and visit with me as I teach two workshops on FOSS4g, big data geoanalytics, Python and SQL: one at Colorado State University in Fort Collins on October 25, and one in Denver at Metropolitan State University of Denver (October 26).  I’d love to see you there!

And as always, all my online video courses are available at www.gisadvisor.com.   For this, you might find Spatial SQL with PostGIS and Big Data Analytics with GIS to be two very useful courses to pursue this kind of work further.


Maryland GIS Conference: Workshop Results

We had a very successful workshop on GIS at the Maryland Geospatial Conference – well, actually two workshops.  I was asked to teach a 4-hour workshop on  GIS technology:

The workshop covered 4 different topics in 4 hours: Desktop GIS with QGIS, Server based GIS with Postgres/PostGIS, Developer GIS with Python, and finally Big Data Analytics with GIS.  That’s a lot of material in a short amount of time.  I wondered what the interest would be…

We planned to open this hands-on workshop up to 16 people.  The workshop coordinators came back and asked if we could expand the workshop to 20 people, and offer two sessions throughout the day – I said sure, not knowing if we’d fill it up.   Before I knew it, we had 31 people signed up for the morning session!!  We then put a final cap of 21 on the afternoon session, just to give me a break!  So, we had over 50 people attending these two workshops – this tells me there is a huge interest among professionals in obtaining hands-on GIS training (I plan to offer more training in the coming year – or, you can join me in one of my online GIS workshops).  In fact, I will likely offer 1- and 2-day workshops on each of the four topics to dive deeper into them.

It was a whirlwind to say the least.  There was a lot of advanced material to cover in a very short period of time.  But, was it actually beneficial to cram this much information into people’s brains?  The following survey results say absolutely.  While there were lots of questions, a few of them are particularly pertinent:

The reason I do these workshops is because I want to help professionals in their career.  I was thrilled at the number of people who felt like the workshop taught them something new, made them want to apply these skills at work, and would help them in their professional development.  But, how does it compare with other workshops?

When comparing this workshop (which was obviously drinking out of a fire hose) with other GIS training they received on a scale of 1-10, almost 80% of the people in attendance rated the workshop as an 8 or better.  So, do the people want more of this?

[survey chart]

Almost 80% of the participants rated their willingness at an 8 when asked if they would pay for more training opportunities.   And, a few of the highlights of people’s comments were:

This workshop demonstrated how you can tie together various tools and data sources together seamlessly. Like the use of open source software as well.

As a novice, I appreciated learning the highlights of multiple technologies and packages related to GIS.

Using new programs to complete analyses is hard without an introduction. This workshop provided this in an easy to understand format.

The QGIS Intro and the PostGIS were most valuable to me because I had other resources and exposure to Python already BUT I still enjoyed the Python component; Big Data could be an entire course or discussion on its own.

I found the demonstration of using large data sets most valuable.

These are exciting results, and I can’t wait to do more.  In fact, I’m in discussion with Colorado State University to give this workshop in October in Fort Collins for the University students, and in Denver for the professional GIS community.  We are making one change, however – I am going to expand this to a full day workshop.

So, how about you?  Would you be interested in having me come out to your organization or city to give a workshop on QGIS, Postgres, Python, and big data analytics with GIS?   I’d love to meet the GIS professionals in your area and give a workshop.


Is a Geography Degree worth it?

Recently, we were asked by our new University President, Chuck Wight, how our students are doing in obtaining their first job.  You have to understand, Chuck is fanatical about the student experience, and frequently gives talks on making college more affordable, and demonstrating that a college degree has value.  This was an excellent question.  What he was really doing was asking what is on the minds of so many parents:

is this degree that my child is getting worth the money we are going to pay?

I’m embarrassed to say that most of us who heard this question only really had anecdotal evidence.  However, some of us came away from our meeting inspired by Chuck, instead of dejected.  Rather than wait to do something, we reached out to around 60 of our recent graduates (May 2018, December 2017) with a Google poll to see how they were doing immediately after graduation.

We asked basic things to start off: gender, major within the Department of Geography and Geosciences,  their individual track (i.e. atmospheric science, GIS, Planning), and then moved on to more interesting questions like whether they had an internship.  Finally, we asked two important questions about employment and paying for college.   The following is a brief review of the results.

Gender

Believe it or not, this was an important question.  We are only getting a sample, and the big question is whether our sample is any good, or if it is terribly biased.  One of the only things we know for sure is the gender of our students.  So, if the response to this question accurately reflected the gender make-up of our graduates, then we could be fairly confident that the other questions were an accurate reflection of our Department.  This would allow us to make inferences about our recent graduating class.

Fortunately for us, the responses to the gender question closely matched our actual graduating class make-up.  Therefore, we have greater confidence in the results of our other questions.  Given the large push to get more women into STEM, this also motivates us to increase the participation of women in our Department.

The Degree in General

From our survey, we see that most of our majors focused on Geography, and GIS and Atmospheric Science were the leading tracks that students concentrated on.


[survey chart]

These results give us really nice targets to consider, like how to increase enrollments in our other tracks.  In fact, because students can have multiple tracks, we can encourage them to consider pairing say their GIS track with Planning or Human Geography.

Internship

[survey chart]

We found that most of our students had internships, and most of those internships were paid.  This really speaks to the value proposition.  In our Department, we really value students obtaining real-world experience in their major.  This is what makes them marketable.  The fact that 75% of our students had an internship tells me we are doing our job to plug these students into their field of study – it also tells me that this is something hard to improve upon.  This, we believe, really makes a difference when it comes to landing that first job.  I’m also extremely proud of the fact that 70% of our students are being paid for their internships.

Employment

This is the big one.  And here are the results:

[pie chart: post-graduation outcomes]

I am so proud of our students, Department, and University.  These numbers are astounding.  In a day and age when people are wondering if a college degree is worth it, I believe this pie chart speaks for itself.  Remember, these are recent graduates.  You know, the kind of people who don’t have the experience to get a job…  In the case of our majors,  over 94% of our students are engaged in something post-graduation (job, graduate school, or service organization).  Breaking things down:

  • almost 60% of our students are already employed within 2 months of graduating
  • 30% of our students are in graduate school – and notice, every one of them is funded!  If this doesn’t say something about the value of a Salisbury University degree in Geography and Geosciences, I don’t know what does.
  • 5% of our students have engaged with a service organization like the Peace Corps, AmeriCorps, or something similar.
  • 10% of our students are still looking for employment.

Over the next few days, I will be slicing these responses up a little more to see what percentage of students who had an internship have a job, or what percentage of students in the GIS track have a job.  But for now, I am very pleased to see that upon graduation, our students are doing quite well in their next step.

Paying for college

This question gives us a lot to think about.  Another question students are asking is: how am I going to pay for college?  I have a series of blog posts on that here.  I am very happy to see that over 50% of our students have scholarships, but I want that number to be higher.  Again, without this kind of survey, we would have no idea what our students are doing with regard to paying for college.  Now, we have some actionable items, and the one at the top of my list is to see us increase scholarship opportunities.  Also, with 73% of our students taking on loans, I would like to see that number reduced.

Final Thoughts

I was so pleased to write this blog post.  I can now answer my President when he asks how students are doing in obtaining their first job:

quite well, Mr. President, and now we have some other things to think about to make it even better.

Finding “Dangles” with PostGIS

Do you have a set of lines in which you need to determine if there are any “dangle” nodes?  A dangle is a line segment that overhangs another line segment.  Now, some dangles are valid, like a pipe that terminates in a cul-de-sac.

A few people have posted about this already, but I figured I would give it a shot as well, as I think my SQL is a little more terse.  Anyway, here is the query, and we’ll talk about it line by line:

SELECT DISTINCT g1
INTO dangles
FROM plines, 
    (SELECT g AS g1 FROM  
         (SELECT g, count(*) AS cnt  
          FROM  
              (SELECT  ST_StartPoint(g) AS g FROM plines
               UNION ALL
               SELECT  ST_EndPoint(g) AS g FROM plines ) AS T1 
         GROUP BY g) AS T2
     WHERE cnt = 1) AS T3
WHERE ST_Distance(g1, g) BETWEEN 0.01 AND 2;

The first thing to notice is the most inner select statement.  We are using ST_StartPoint and ST_EndPoint to grab the endpoints of the lines – these we’ll call nodes.

The next line to notice is where we are getting the count of the nodes.  We are grabbing all the nodes, but using the GROUP BY clause to count how many nodes occupy the same place in space.  Now, an intersection of two lines will have 2 nodes (one from the first line and one from the second line).  But, a “dangle” will only have one node occupying a space.  This is where the next section of SQL comes into play.

What we want to do is only select those nodes where the count (cnt) is equal to 1.  That means the node is just sitting there in space.  It is a “dangle”.  But, not all dangles are created equally, as I said above.  That final WHERE clause lets me specify how far I want a dangle node to be from another line.  In the example above, we are choosing nodes between 0.01m and 2m away.  The last bit of SQL we have to consider is the DISTINCT clause.  Nodes can be near one or more lines.  We don’t want to double count them, so using DISTINCT eliminates the duplicates.
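The endpoint-counting idea at the heart of the query is easy to see outside the database, too.  Here is a minimal Python sketch of just that counting step (plain coordinate tuples stand in for PostGIS geometries, and the ST_Distance filter is left out):

```python
from collections import Counter

def dangle_candidates(lines):
    """Endpoints that occur exactly once across all line endpoints.

    Each line is a (start, end) pair of coordinate tuples.  An endpoint
    shared by two or more lines gets a count >= 2; a count of 1 means
    the node is just sitting there in space -- a dangle candidate.
    """
    counts = Counter()
    for start, end in lines:
        counts[start] += 1
        counts[end] += 1
    return {pt for pt, cnt in counts.items() if cnt == 1}

# Three lines meeting at (1, 1): every free end shows up exactly once.
lines = [((0, 0), (1, 1)), ((1, 1), (1, 2)), ((1, 1), (2, 2))]
print(dangle_candidates(lines))  # the shared node (1, 1) is excluded
```

Everything the SQL does with GROUP BY and cnt = 1 is here in miniature; the real query then keeps only the candidates that fall within the distance band of another line.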

That’s it!  Pretty easy.  Think of the ST_Distance function as a variant of the basic SQL to find dangles.  There are other variants we could add to this if we’d like, such as the length of the line the dangle touches has to be less than 5m, or something like that.  That would be just a matter of adding another WHERE clause.


Multi-ring (non-overlapping) Buffers with Manifold 9

In my last post, I showed you how to create multi-ring, non-overlapping buffers with spatial SQL in PostGIS.  In this post, I want to do the same thing, but with Manifold GIS.  To be honest, it is pretty much the same thing, although I think Manifold is a little easier because it has a FIRST aggregate function in SQL, whereas in PostGIS we had to use DISTINCT ON.  Either way, it is pretty easy, so now database and SQL professionals have a way to create multi-ring buffers entirely in SQL.

 If you want to learn more about SQL, programming, open source GIS, or Manifold GIS, check out courses at www.gisadvisor.com.  

Multi-Ring (non-overlapping) Buffers with PostGIS

I was interested in creating multi-ring buffers, but with a twist: I didn’t want the buffers to overlap one another.  In other words, if I had concentric buffers with distances of 100, 200, and 300 around a point, I want those buffers to reflect distances of 0-100, 100-200, and 200-300.  I don’t want them overlapping one another.  You can actually do that with the PostGIS function ST_SymDifference, but there are a few nuances that you have to be aware of.

Unlike some of my longer videos, this one starts out with the answer, and then we walk through all the SQL.  You’ll see it isn’t so bad.  And, you continue to see that spatial is not special!  It’s only 20 minutes long, but the answer is shown in the first minute.

In the video I’ll slowly walk you through all the spatial SQL to create buffers for the points and trim all the overlaps so that there are no overlapping buffers.  You’ll learn some really cool Postgres commands  including:

ST_Buffer, ST_SymDifference, DISTINCT ON, and SET WITH OIDS.
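If you want the gist without the video, the core pattern can be sketched directly in SQL.  This is a minimal sketch, not the exact query from the video: it assumes a points table pts with geometry column g, and uses ST_Difference to clip each ring by the next-smaller buffer (the video builds the same idea around ST_SymDifference):

```sql
-- Sketch: three concentric rings that do not overlap.  Each ring is the
-- larger buffer minus the next-smaller one, so the bands represent
-- 0-100, 100-200, and 200-300 units from each point.
SELECT id, 100 AS dist, ST_Buffer(g, 100) AS geom FROM pts
UNION ALL
SELECT id, 200, ST_Difference(ST_Buffer(g, 200), ST_Buffer(g, 100)) FROM pts
UNION ALL
SELECT id, 300, ST_Difference(ST_Buffer(g, 300), ST_Buffer(g, 200)) FROM pts;
```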

I found myself amazed that with a few SQL tweaks, we were able to turn ordinary buffers to more useful non-overlapping buffers.  I hope you enjoy the video.

I’d like to create more videos like this – please leave some comments below so that I know others want me to continue these kinds of tutorials.

 If you want to learn more about SQL, programming, open source GIS, or Manifold GIS, check out courses at www.gisadvisor.com.  

GIS Analysis of Overlapping Layers

My friend is attempting to quantify the area of different landuse values for different areas that are upstream from her sample points.  This means she needs sample points, landuse, and upstream areas (i.e. sub-watersheds).  The problem is, her watersheds overlap, the buffer distances around the sample points overlap themselves AND the watersheds, and she then needs to summarize the results.  It’s actually a tricky problem due to the overlaps: GIS software doesn’t really like when features within a single layer overlap one another.  Also, if a buffer for a sample point overlaps two different watersheds, that becomes tricky too.

Sure, you can solve it with a few for loops, inserting the results into a new table, but that really is a hassle.  Also, it has to be done for different distances and different land cover types.

So, I once again turned to SQL – remember what I keep telling you – spatial is not special.  It’s just another data type.  This video steps you through performing a multi-ring buffer on overlapping objects from 3 different layers: sample points, watersheds, and land use.  As we step through the SQL, you’ll see how easy it is to put the query together.  And, at the end, you’ll see how flexible the query is should you want to change your objectives.  And, for good measure, we’ll throw in a little bit of parallel processing.

5(6) Ways to get data into Postgres/PostGIS

Lots of people ask me how to get data into PostGIS.  This video is a quick 8-minute demonstration of 6 ways to get data into Postgres – 4 free (pgAdmin III, shp2pgsql, ogr2ogr, QGIS Database Manager) and 2 commercial (Manifold 8, Manifold 9).

Now, in this case I’m only working with a shapefile and Postgres.  I will do another post that works with other data formats (geodatabases, AutoCAD, etc.), and other databases (SQL Server, Oracle, etc.).  I hope you enjoy it.  Also, if you have other ways you like getting data into PostGIS, leave a comment below….
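For reference, the two free command-line routes look roughly like this (database name, schema, table, and SRID here are placeholder assumptions, not values from the video):

```shell
# shp2pgsql: convert a shapefile to SQL and pipe it straight into psql
shp2pgsql -s 4326 -I parcels.shp public.parcels | psql -d mydb

# ogr2ogr: load the same shapefile directly into a PostGIS database
ogr2ogr -f "PostgreSQL" PG:"dbname=mydb" parcels.shp -nln public.parcels
```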


Want to learn how to use PostGIS, QGIS, Manifold, or other advanced GIS tools?  Check out my courses here.  Some of the catchy titles are Big Data Analytics with GIS, Manage Spatial Data with Microsoft SQL Server, Enterprise GIS with Open Source Software, Spatial SQL with PostGIS, and Python for Geospatial.