{ "cells": [ { "cell_type": "code", "execution_count": 1, "id": "5712e557", "metadata": {}, "outputs": [], "source": [ "# You don't need to run this--it's just a workaround so images show up in these GitHub Jupyter notebooks\n", "from dapper.utils.utils import display_image_gh_notebook\n", "from IPython.display import HTML" ] }, { "cell_type": "markdown", "id": "8be352fc-1d78-4e9a-a476-c4ce9957ef7a", "metadata": {}, "source": [ "## 5. Generating Topounits\n", "This notebook demonstrates how to generate topounits using a couple of different approaches. `dapper` uses Google Earth Engine (GEE) to compute topounits and export polygons representing them. \n", "\n", "Topounits are effectively subdivisions of a grid cell within ELM. Topounits can pass lateral (hydrologic) fluxes between themselves, a capability that does not exist between grid cells. Some parameters in ELM can be provided at the Topounit scale, so Topounits must be delineated to sample these parameters appropriately.\n", "\n", "### How are Topounits defined?\n", "\n", "The idea driving Topounits is that different portions of a landscape (grid cell) may undergo different trajectories of evolution. Topounits are intended to allow a representation of this within-cell heterogeneity. However, there is no prescribed definition for Topounits, and different definitions may be relevant for different types of ELM experiments. Given that Topounits allow for lateral flux exchanges, it makes sense to develop some simple definitions based on digital elevation models (DEMs). Aspect may also dictate the dynamics of landscape trajectories in the Arctic due to differential solar radiation on north- versus south-facing slopes, for example. `dapper` currently contains two ways to define Topounits: elevation bands and elevation bands + aspect. Other methods are possible, so if you have ideas please open an issue in this repo or reach out. 
For example, topounits could be delineated based on historical meteorological data, snow observations, height above nearest drainage, etc. One \"hard\" constraint is that the maximum number of topounits per grid cell is 11. As far as I understand, this is an E3SM-required constraint, but I also believe it's possible to relax this number in offline ELM runs (don't quote me). \n", "\n", "### This notebook demonstrates:\n", "\n", "- **A.** Generating Topounits for the Kuparuk watershed with elevation-only bins\n", "- **B.** Generating Topounits for the Kuparuk watershed with elevation bins and aspect\n", "- **C.** Generating Topounits for 0.5 degree grid cells\n", "\n", "and discusses some of the options you have and potential pitfalls along the way.\n", "\n", "
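To make the elevation-band idea concrete, here's a minimal sketch of equal-area elevation binning using percentiles. This is purely illustrative (plain NumPy on synthetic elevations), not `dapper`'s implementation:

```python
import numpy as np

# Synthetic stand-in for elevations sampled from a DEM (meters)
rng = np.random.default_rng(0)
elev = rng.uniform(0, 600, size=10_000)
n_bins = 10

# Percentile edges put roughly the same number of pixels (i.e., area) in each band
edges = np.percentile(elev, np.linspace(0, 100, n_bins + 1))
band = np.clip(np.digitize(elev, edges[1:-1]), 0, n_bins - 1)  # band index 0..9 per pixel
print(np.bincount(band))  # each of the 10 bands holds ~1,000 pixels
```

Each band then becomes one Topounit, regardless of how wide its elevation range is.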
\n", "💡 Note: You need a Google Earth Engine account to run this notebook.\n", "
" ] }, { "cell_type": "markdown", "id": "01f9cb80", "metadata": {}, "source": [ "### A. Generating Topounits for the Kuparuk watershed with elevation-only bins\n" ] }, { "cell_type": "code", "execution_count": null, "id": "e53bb136", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Using scale of 1317.3008196800217 m for sampling elevations.\n", "Success! FeatureCollection loaded as GeoDataFrame.\n" ] } ], "source": [ "import ee\n", "import pandas as pd\n", "import geopandas as gpd\n", "from shapely.geometry import box\n", "from dapper.utils import gee_utils as gu\n", "from dapper.topounit import gen_topounits as tu\n", "\n", "ee.Initialize(project='ee-jonschwenk') # use your Google Cloud project account instead of mine\n", "\n", "# We need to provide an ee.Feature to our function, so let's load an asset I've shared that is the Kuparuk watershed\n", "# See section C for an example of how to create your own ee.Feature.\n", "feature = ee.Feature(ee.FeatureCollection('projects/ee-jonschwenk/assets/E3SM/Kuparuk_gageshed').first())\n", "method = 'epercentiles' # specifies we want to use percentiles of elevation to define the Topounits\n", "n_elev_bins = 10 # number of bins to use; for this method corresponds to the number of Topounits\n", "dem_source = 'arcticdem' # this is the default, but putting it here so you know we're using the ArcticDEM to do the binning\n", "return_as = 'gdf' # specifies we want the Topounits to be returned as a GeoDataFrame; we could also choose \"asset\" to have them exported to our Google Drive\n", "export_scale = 500 # in meters. 
This is a big watershed, so 500-meter pixels should be a nice balance\n", "\n", "topounits = tu.make_topounits(feature, \n", " method = method, \n", " n_elev_bins = n_elev_bins, \n", " dem_source = dem_source, \n", " return_as = return_as,\n", " export_scale = export_scale,\n", " verbose = True) # So we get information about what's going on" ] }, { "cell_type": "markdown", "id": "155200d9", "metadata": {}, "source": [ "Let's unpack the messages that were printed. \n", "\n", "First, we see that a scale of `1317.3...` meters was used for sampling elevations. Under the hood, `dapper` automatically computes the scale at which to sample the underlying DEM, based on the native resolution of the DEM and the size of the feature (polygon) we're sampling over. `dapper` tries to ensure there are roughly 500 pixels per bin for a good statistical representation. The *actual* scale that GEE uses will be near 1317 meters, but not exactly, as it depends on how GEE has pyramided the DEM. This is not something that `dapper` allows you to control, but it could be important to know depending on your application.\n", "\n", "Second, we see a `Success! ...` message. You need to understand a little about how the Google Earth Engine API works for this to mean anything. Essentially, GEE will do lighter export tasks \"on the fly,\" without spinning up an actual `GEE Task`. However, for larger export tasks, this attempt to directly retrieve data will fail. If that happens, `dapper` will automatically spin up a `GEE Task` for you and give you a message. What controls the ability to directly download results? It's the size of the \"thing\" you're trying to download. In our case, `dapper` has created a set of 10 Topounits that are defined by polygons (multipolygons, actually), and we need GEE to send back the vertices of these polygons. We can control the \"resolution\" of these polygons by specifying different `export_scale` values. 
I selected one above that I knew would export just fine. Let's select a much finer (smaller) scale and see what happens:\n", "\n" ] }, { "cell_type": "code", "execution_count": 7, "id": "2ab509ca", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Using scale of 1317.3008196800217 m for sampling elevations.\n", "Success! FeatureCollection loaded as GeoDataFrame.\n" ] } ], "source": [ "topounits = tu.make_topounits(feature, \n", " method = method, \n", " n_elev_bins = n_elev_bins, \n", " dem_source = dem_source, \n", " return_as = return_as,\n", " export_scale = 2,\n", " verbose = True) # So we get information about what's going on" ] }, { "cell_type": "markdown", "id": "0a8b5c5a", "metadata": {}, "source": [ "Ok, well that's not what I expected--it worked just fine. We'll try to \"break\" this in the next example, but we successfully returned the Topounits without needing to export them as an asset, even at a 2-meter resolution!\n", "\n", "
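Circling back to the sampling-scale message: since `dapper` targets roughly 500 DEM samples per bin (as described above), the reported scale can be reproduced with a back-of-envelope calculation. The formula below is my reconstruction of that heuristic, not `dapper`'s actual code:

```python
import math

def sampling_scale(area_m2, n_bins, pixels_per_bin=500):
    # Pixel edge length (m) such that the polygon contains about
    # pixels_per_bin * n_bins pixels in total
    return math.sqrt(area_m2 / (pixels_per_bin * n_bins))

# The Kuparuk watershed is roughly 8,100 km^2; with 10 bins this lands
# in the neighborhood of the ~1317 m scale that was printed above
print(sampling_scale(8.1e9, 10))
```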
\n", "💡 Note: Keep in mind that you might get a different result using the same parameters--the amount GEE is willing to send back depends on its current demands, which changes dynamically.\n", "
" ] }, { "cell_type": "markdown", "id": "34a88271", "metadata": {}, "source": [ "Ok, now let's do a little plotting. `topounits` is a `GeoDataFrame` so we can easily export it as a shapefile, geopackage, geojson, whatever and look at it in a GIS." ] }, { "cell_type": "code", "execution_count": 9, "id": "ca7fae6f", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " geometry max_elev \\\n", "0 MULTIPOLYGON (((-149.66645 69.7141, -149.66643... 70.699463 \n", "1 MULTIPOLYGON (((-150.01059 69.54603, -150.0105... 100.270477 \n", "2 MULTIPOLYGON (((-149.01473 69.55271, -149.0147... 126.762054 \n", "3 MULTIPOLYGON (((-150.02924 69.38755, -150.0292... 157.339340 \n", "4 MULTIPOLYGON (((-150.22777 69.2547, -150.2277 ... 197.389450 \n", "5 MULTIPOLYGON (((-150.14548 69.16701, -150.1454... 236.533356 \n", "6 MULTIPOLYGON (((-150.08549 69.06645, -150.0854... 289.752777 \n", "7 MULTIPOLYGON (((-149.30842 69.02476, -149.3084... 393.455048 \n", "8 MULTIPOLYGON (((-149.60804 68.75233, -149.6079... 602.020996 \n", "9 MULTIPOLYGON (((-149.61361 68.75208, -149.6136... 100000.000000 \n", "\n", " min_elev topounit_id \n", "0 0.000000 1 \n", "1 70.699463 2 \n", "2 100.270477 3 \n", "3 126.762054 4 \n", "4 157.339340 5 \n", "5 197.389450 6 \n", "6 236.533356 7 \n", "7 289.752777 8 \n", "8 393.455048 9 \n", "9 602.020996 10 \n" ] } ], "source": [ "print(topounits)\n", "topounits.to_file(r'X:\\Research\\NGEE Arctic\\6. Topounits\\kuparuk_topounits_2m.gpkg', driver='GPKG') # Set your path accordingly" ] }, { "cell_type": "markdown", "id": "654583b5", "metadata": {}, "source": [ "One thing to point out is that the `topounits` GeoDataFrame also contains columns (`max_elev` and `min_elev`) that define the elevation limits for each band. 
Also, a `topounit_id` is provided for convenience, but has no real meaning other than being a unique id.\n", "\n", "I've pulled `kuparuk_topounits_2m.gpkg` into QGIS and done some coloring by elevation; here's what it looks like:" ] }, { "cell_type": "code", "execution_count": 12, "id": "f857cf32", "metadata": {}, "outputs": [ { "data": { "text/html": [ "default" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(HTML(display_image_gh_notebook('5A-kuparuk-topounits-elevationbins.PNG')))" ] }, { "cell_type": "markdown", "id": "3d0ce1e5", "metadata": {}, "source": [ "Darker reds are higher elevation Topounits. We see that we have 10 different colors here, each with approximately the same area. We also see that the edges of these Topounits are kind of noisy:" ] }, { "cell_type": "code", "execution_count": 13, "id": "024a28a9", "metadata": {}, "outputs": [ { "data": { "text/html": [ "default" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(HTML(display_image_gh_notebook('5A-kuparuk-topounits-elevationbins-zoom.PNG')))" ] }, { "cell_type": "markdown", "id": "deea6118", "metadata": {}, "source": [ "This \"noise\" isn't random error; it's a consequence of selecting 2 meters as the export scale. As we choose larger export scales, our boundaries will become smoother. You may or may not care about the \"noise\", but it's good to know how to control it just in case.\n", "\n", "Ok, let's move on to making Topounits that consider elevation *and* aspect." ] }, { "cell_type": "markdown", "id": "22baf2e1", "metadata": {}, "source": [ "### B. Generating Topounits for the Kuparuk watershed with elevation bins and aspect\n", "\n", "This example is really similar to `A`, except we need to define an aspect range and specify some of the `make_topounits` parameters a little differently."
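Before defining the aspect ranges, note that aspect is circular: a range like 270°→90° (North) wraps through 0°. Here's a minimal sketch of how labeled `(start, end, label)` ranges can classify an aspect angle, including the wraparound case. This is illustrative only, not `dapper`'s internal logic:

```python
def classify_aspect(angle, aspect_ranges):
    # angle is in degrees clockwise from North; ranges are (start, end, label)
    angle = angle % 360
    for start, end, label in aspect_ranges:
        if start <= end:
            if start <= angle <= end:
                return label
        elif angle >= start or angle <= end:  # range wraps through 0 (e.g., 270 -> 90)
            return label
    return None

aspect_ranges = [(270, 90, 'N'), (90.01, 269.99, 'S')]
print(classify_aspect(0, aspect_ranges), classify_aspect(180, aspect_ranges))  # N S
```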
] }, { "cell_type": "code", "execution_count": 15, "id": "504dcee1", "metadata": {}, "outputs": [ { "ename": "ValueError", "evalue": "20 topounits exceed max of 11.", "output_type": "error", "traceback": [ "\u001b[31m---------------------------------------------------------------------------\u001b[39m", "\u001b[31mValueError\u001b[39m Traceback (most recent call last)", "\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[15]\u001b[39m\u001b[32m, line 17\u001b[39m\n\u001b[32m 14\u001b[39m return_as = \u001b[33m'\u001b[39m\u001b[33mgdf\u001b[39m\u001b[33m'\u001b[39m \u001b[38;5;66;03m# specifies we want the Topounits to be returned as a GeoDataFrame; we could also choose \"asset\" to have them exported to our Google Drive\u001b[39;00m\n\u001b[32m 15\u001b[39m export_scale = \u001b[32m500\u001b[39m \u001b[38;5;66;03m# in meters. This is a big watershed, so using pixels of 500 square meters should be a nice balance\u001b[39;00m\n\u001b[32m---> \u001b[39m\u001b[32m17\u001b[39m topounits = \u001b[43mtu\u001b[49m\u001b[43m.\u001b[49m\u001b[43mmake_topounits\u001b[49m\u001b[43m(\u001b[49m\u001b[43mfeature\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\n\u001b[32m 18\u001b[39m \u001b[43m \u001b[49m\u001b[43mmethod\u001b[49m\u001b[43m \u001b[49m\u001b[43m=\u001b[49m\u001b[43m \u001b[49m\u001b[43mmethod\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\n\u001b[32m 19\u001b[39m \u001b[43m \u001b[49m\u001b[43mn_elev_bins\u001b[49m\u001b[43m \u001b[49m\u001b[43m=\u001b[49m\u001b[43m \u001b[49m\u001b[43mn_elev_bins\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\n\u001b[32m 20\u001b[39m \u001b[43m \u001b[49m\u001b[43mdem_source\u001b[49m\u001b[43m \u001b[49m\u001b[43m=\u001b[49m\u001b[43m \u001b[49m\u001b[43mdem_source\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\n\u001b[32m 21\u001b[39m \u001b[43m \u001b[49m\u001b[43mreturn_as\u001b[49m\u001b[43m \u001b[49m\u001b[43m=\u001b[49m\u001b[43m 
\u001b[49m\u001b[43mreturn_as\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 22\u001b[39m \u001b[43m \u001b[49m\u001b[43mexport_scale\u001b[49m\u001b[43m \u001b[49m\u001b[43m=\u001b[49m\u001b[43m \u001b[49m\u001b[43mexport_scale\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 23\u001b[39m \u001b[43m \u001b[49m\u001b[43maspect_ranges\u001b[49m\u001b[43m \u001b[49m\u001b[43m=\u001b[49m\u001b[43m \u001b[49m\u001b[43maspect_ranges\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;66;43;03m# Note we added this\u001b[39;49;00m\n\u001b[32m 24\u001b[39m \u001b[43m \u001b[49m\u001b[43mverbose\u001b[49m\u001b[43m \u001b[49m\u001b[43m=\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mTrue\u001b[39;49;00m\u001b[43m)\u001b[49m \u001b[38;5;66;03m# So we get information about what's going on)\u001b[39;00m\n", "\u001b[36mFile \u001b[39m\u001b[32mX:\\Research\\NGEE Arctic\\dapper\\src\\dapper\\gee\\topounits.py:161\u001b[39m, in \u001b[36mmake_topounits\u001b[39m\u001b[34m(feature, method, max_topounits, n_elev_bins, aspect_ranges, dem_source, return_as, export_scale, asset_name, asset_ftype, verbose)\u001b[39m\n\u001b[32m 159\u001b[39m total_topounits = n_elev_bins * num_aspect_bins\n\u001b[32m 160\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m total_topounits > max_topounits:\n\u001b[32m--> \u001b[39m\u001b[32m161\u001b[39m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\u001b[33mf\u001b[39m\u001b[33m\"\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mtotal_topounits\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m topounits exceed max of \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mmax_topounits\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m.\u001b[39m\u001b[33m\"\u001b[39m)\n\u001b[32m 162\u001b[39m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[32m 163\u001b[39m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\u001b[33mf\u001b[39m\u001b[33m\"\u001b[39m\u001b[33mUnsupported method: 
\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mmethod\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m\"\u001b[39m)\n", "\u001b[31mValueError\u001b[39m: 20 topounits exceed max of 11." ] } ], "source": [ "# We will use a new \"method\" for computing Topounits\n", "method = 'elevaspect' # specifies we want to use elevation percentiles AND aspect to define the Topounits\n", "\n", "# We also need to provide aspect ranges to use. These are in the form (start, end, label).\n", "# Here, let's consider only North and South. So everything from 270 (West) to 90 (East) will be North,\n", "# and everything from 90 (East) to 270 (West) will be South. This assumes that we move around the \n", "# compass in a clockwise direction. Our aspect ranges therefore look like this:\n", "aspect_ranges = [(270, 90, 'N'), (90.01, 269.99, 'S')]\n", "\n", "# All the other parameters are the same\n", "feature = ee.Feature(ee.FeatureCollection('projects/ee-jonschwenk/assets/E3SM/Kuparuk_gageshed').first())\n", "n_elev_bins = 10 # number of elevation bins to use\n", "dem_source = 'arcticdem' # this is the default, but putting it here so you know we're using the ArcticDEM to do the binning\n", "return_as = 'gdf' # specifies we want the Topounits to be returned as a GeoDataFrame; we could also choose \"asset\" to have them exported to our Google Drive\n", "export_scale = 500 # in meters. This is a big watershed, so 500-meter pixels should be a nice balance\n", "\n", "topounits = tu.make_topounits(feature, \n", " method = method, \n", " n_elev_bins = n_elev_bins, \n", " dem_source = dem_source, \n", " return_as = return_as,\n", " export_scale = export_scale,\n", " aspect_ranges = aspect_ranges, # Note we added this\n", " verbose = True) # So we get information about what's going on" ] }, { "cell_type": "markdown", "id": "9cc6226f", "metadata": {}, "source": [ "I intentionally chose this arrangement to demonstrate something you need to consider. 
Right now, the maximum number of Topounits that can be defined for a single grid cell is `11`. This is an E3SM constraint that `dapper` builds in by default. You can change this maximum by specifying `max_topounits` when calling `make_topounits`. However, let's respect the 11 here and figure out what went wrong. We asked for 10 elevation bins and two aspect bins. Each elevation bin is split by the aspect bins, so we requested 10 (elevation bins) * 2 (aspect bins) = 20 Topounits, which exceeds 11. Let's adjust our parameters to make sure that the total number of Topounits does not exceed 11." ] }, { "cell_type": "code", "execution_count": null, "id": "8ccd6cdd", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Using scale of 1317.3008196800217 m for sampling elevations.\n", "Success! FeatureCollection loaded as GeoDataFrame.\n" ] } ], "source": [ "# Let's reduce the number of elevation bins to 5, so we end up with 10 total Topounits\n", "n_elev_bins = 5 # 5 elevation bins * 2 aspect bins = 10 total Topounits\n", "export_scale = 50 # just for fun\n", "topounits = tu.make_topounits(feature, \n", " method = method, \n", " n_elev_bins = n_elev_bins, \n", " dem_source = dem_source, \n", " return_as = return_as,\n", " export_scale = export_scale,\n", " aspect_ranges = aspect_ranges, # Note we added this\n", " verbose = True) # So we get information about what's going on" ] }, { "cell_type": "markdown", "id": "9d863a13", "metadata": {}, "source": [ "Great, it worked. Let's look at these Topounits in a GIS."
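As an aside, the earlier failure was predictable without calling GEE at all: the total Topounit count is just elevation bins times aspect bins. A quick pre-flight check (a hypothetical helper that mirrors the error `dapper` raised):

```python
def check_topounit_count(n_elev_bins, aspect_ranges, max_topounits=11):
    # Each elevation bin is split by every aspect bin
    total = n_elev_bins * len(aspect_ranges)
    if total > max_topounits:
        raise ValueError(f"{total} topounits exceed max of {max_topounits}.")
    return total

aspect_ranges = [(270, 90, 'N'), (90.01, 269.99, 'S')]
print(check_topounit_count(5, aspect_ranges))  # 10 -- fits under the max of 11
```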
] }, { "cell_type": "code", "execution_count": 21, "id": "ee570358", "metadata": {}, "outputs": [ { "data": { "text/html": [ "default" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(HTML(display_image_gh_notebook('5B-kuparuk-elevaspect-wzoom.PNG')))" ] }, { "cell_type": "markdown", "id": "f3ba6457", "metadata": {}, "source": [ "The above image contains 10 total Topounits. The five red colors distinguish the elevation bins, while the black lines show topounits that are predominantly North facing. A zoom view is shown to illustrate the spatial heterogeneity of Topounits. Areas without hatching represent predominantly South facing slopes. You can see from the left plot that the watershed predominantly faces North, but there are definitely some South-ish faces. Note that while elevation binning puts the same amount of area in each bin (so each Topounit is about the same size), aspect binning must adhere to the actual aspect distribution, so you can end up with very unbalanced (in terms of area) Topounits. This might not matter for your application, but it's good to be aware." ] }, { "cell_type": "markdown", "id": "2118f2ad", "metadata": {}, "source": [ "### C. Generating Topounits for 0.5 degree grid cells\n" ] }, { "cell_type": "markdown", "id": "b7cda61d", "metadata": {}, "source": [ "Here we want to consider the case where our domains are not watersheds, as it will highlight some differences. We'll create four 0.5 x 0.5 degree grid cells and delineate some Topounits for them." ] }, { "cell_type": "code", "execution_count": null, "id": "64fd3bcd", "metadata": {}, "outputs": [], "source": [ "# Define a top-left coordinate\n", "lat = 68.5\n", "lon = -149.5\n", "degrees_per_cell = 0.5\n", "n_steps = 2 # how many steps in each dimension (2x2=4 total grid cells)\n", "\n", "# Construct the polygons. 
\n", "polygons = []\n", "for i in range(n_steps):\n", " for j in range(n_steps):\n", " minx = lon + j * degrees_per_cell\n", " maxx = minx + degrees_per_cell\n", " maxy = lat - i * degrees_per_cell\n", " miny = maxy - degrees_per_cell\n", " polygons.append(box(minx, miny, maxx, maxy))\n", "\n", "# Put the cells in a GeoDataFrame and assign the CRS\n", "gdf = gpd.GeoDataFrame(geometry=polygons, crs=\"EPSG:4326\")\n", "gdf['gid'] = [0,1,2,3] # Assign an id so we can keep track of the grid cells" ] }, { "cell_type": "markdown", "id": "891a8479", "metadata": {}, "source": [ "Now that we have our grid cells, we can make Topounits as we did in A and B. For this example, I'll choose three elevation bins and four aspect bins. Note that this equates to 12 Topounits per grid cell, so we'll need to update the `max_topounits` parameter to allow surpassing the default of 11." ] }, { "cell_type": "code", "execution_count": null, "id": "b77ee8a9", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Processing grid cell 0...\n", "Using scale of 436.92282313300177 m for sampling elevations.\n", "Success! FeatureCollection loaded as GeoDataFrame.\n", "Processing grid cell 1...\n", "Using scale of 436.92282313299023 m for sampling elevations.\n", "Success! FeatureCollection loaded as GeoDataFrame.\n", "Processing grid cell 2...\n", "Using scale of 441.6672291277392 m for sampling elevations.\n", "Success! FeatureCollection loaded as GeoDataFrame.\n", "Processing grid cell 3...\n", "Using scale of 441.6672291277283 m for sampling elevations.\n", "Success! 
FeatureCollection loaded as GeoDataFrame.\n" ] } ], "source": [ "method = 'elevaspect' \n", "aspect_ranges = [\n", " (315, 45, 'N'), # wraps around 0°, centered on 0° (North)\n", " (45, 135, 'E'), # centered on 90° (East)\n", " (135, 225, 'S'), # centered on 180° (South)\n", " (225, 315, 'W') # centered on 270° (West)\n", "]\n", "n_elev_bins = 3 # number of elevation bins; 3 * 4 aspect bins = 12 Topounits per cell\n", "dem_source = 'arcticdem' # this is the default, but putting it here so you know we're using the ArcticDEM to do the binning\n", "return_as = 'gdf' # specifies we want the Topounits to be returned as a GeoDataFrame; we could also choose \"asset\" to have them exported to our Google Drive\n", "export_scale = 500 # in meters. These cells are fairly large, so 500-meter pixels are a reasonable balance\n", "max_topounits = 12 # need to include this now!\n", "\n", "tus = {} # storage for our Topounit gdfs\n", "# Loop through each grid cell and compute Topounits\n", "for _, row in gdf.iterrows():\n", " this_feature = ee.Feature(gu.parse_geometry_object(row['geometry'], row['gid']).first()) # This is awkward and might change later\n", " print(f\"Processing grid cell {row['gid']}...\")\n", " tus[row['gid']] = tu.make_topounits(this_feature, \n", " method = method, \n", " n_elev_bins = n_elev_bins, \n", " dem_source = dem_source, \n", " return_as = return_as,\n", " export_scale = export_scale,\n", " aspect_ranges = aspect_ranges,\n", " max_topounits = max_topounits,\n", " verbose = True) # So we get information about what's going on\n", " \n", "# Now we combine the grid cells into a single GeoDataFrame, adding a 'gid' column to keep track of which geometries come from which grid cell\n", "gdf_combined = gpd.GeoDataFrame(\n", " pd.concat(\n", " [df.assign(gid=gid) for gid, df in tus.items()],\n", " ignore_index=True\n", " ),\n", " crs=next(iter(tus.values())).crs) # inherit CRS from any gdf\n", "\n", "# Export (set your path as you 
want)\n", "gdf_combined.to_file(r'X:\\Research\\NGEE Arctic\\6. Topounits\\four_grid_cells.gpkg')\n" ] }, { "cell_type": "code", "execution_count": 53, "id": "8395054a", "metadata": {}, "outputs": [ { "data": { "text/html": [ "default" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(HTML(display_image_gh_notebook('5C-elevation-aspect-4gridcells.PNG')))" ] }, { "cell_type": "markdown", "id": "ab947c76", "metadata": {}, "source": [ "I struggled to figure out how to plot these and show both the elevation and aspect differences, so I just broke them out in the above images. Some things to note about these results: \n", "- Because we ran each grid cell individually, the elevation ranges are different for each one (but there are only 3 ranges in each cell)\n", "- I used a low-resolution export for speed--these could look much more \"realistic\" if we chose a smaller scale\n", "- There are gaps between the grid cells that are not covered by Topounits. These can be closed in a number of ways, depending on what you're doing. If you want to keep the same processing technique we used here (4 separate calls to `make_topounits`), you could buffer the grid cells a little, then clip the Topounits on the back end. If you are OK with one call to `make_topounits`, you can just do the entire domain, then separate it into the constituent grid cells on the back end.\n", " \n", "Below is an image of all the Topounits, colored by `topounit_id` (there are 12 ids for each grid cell). Remember that `topounit_id=1` *is not the same* for all four grid cells because the elevation bands were recomputed for each grid cell."
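The buffer-then-clip fix for those gaps can be sketched with plain `shapely` geometry (illustrative; the `make_topounits` call on the buffered cell is omitted):

```python
from shapely.geometry import box

cell = box(-149.5, 68.0, -149.0, 68.5)  # one 0.5 x 0.5 degree grid cell
buffered = cell.buffer(0.01)            # delineate Topounits over a slightly padded cell
# ...call make_topounits on the buffered cell here (omitted)...
topounit = buffered                     # stand-in for one returned Topounit polygon
clipped = topounit.intersection(cell)   # clip back to the exact cell boundary
print(clipped.equals(cell))             # True: the Topounit now meets the cell edge exactly
```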
] }, { "cell_type": "code", "execution_count": 55, "id": "b1792839", "metadata": {}, "outputs": [ { "data": { "text/html": [ "default" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(HTML(display_image_gh_notebook('5C-all-topounits-4gridcells.PNG')))" ] }, { "cell_type": "markdown", "id": "2f05a25c", "metadata": {}, "source": [ "Below is an image of the attribute table of the Topounit GeoDataFrame we exported, showing that each Topounit has all the properties needed to identify which elevation bin and/or aspect range it represents." ] }, { "cell_type": "code", "execution_count": 56, "id": "516c0638", "metadata": {}, "outputs": [ { "data": { "text/html": [ "default" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(HTML(display_image_gh_notebook('5C-topounit-attribute-table.PNG')))" ] } ], "metadata": { "kernelspec": { "display_name": "dapper", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.9" } }, "nbformat": 4, "nbformat_minor": 5 }