User API

class afwizard.DataSet(filename=None, spatial_reference=None)

The main class that represents a Lidar data set.

The DataSet class performs lazy loading: instantiating an object of this type does not trigger memory-intensive operations until you do something with the dataset that requires them.

Parameters:
  • filename (str) – Filename to load the dataset from. The dataset is expected to be in LAS/LAZ 1.2-1.4 format. If an absolute filename is given, the dataset is loaded from that location. Relative paths are interpreted (in this order) with respect to the directory set with set_data_directory(), the current working directory, XDG data directories (Unix only) and the Python package installation directory.

  • spatial_reference (str) – A spatial reference as WKT or EPSG code. This will override the reference system found in the metadata and is required if no reference system is present in the metadata of the LAS/LAZ file. If this parameter is not provided, this information is extracted from the metadata.
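A minimal usage sketch (the filename and EPSG code below are placeholders, not data shipped with AFwizard):

```python
import afwizard

# Construct a dataset from a hypothetical LAZ file. The spatial_reference
# override is only needed if the file's metadata lacks a reference system.
ds = afwizard.DataSet(
    filename="survey_area.laz",      # placeholder filename
    spatial_reference="EPSG:25832",  # placeholder CRS override
)
```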

classmethod convert(dataset)

Convert the given dataset to an instance of DataSet

This is used internally to convert datasets between different representations.

Returns:

A dataset with transformed datapoints.

Return type:

afwizard.DataSet

rasterize(resolution=0.5, classification=None)

Create a digital terrain model from the dataset

It is important to note that for archaeological applications, the mesh is not a traditional DEM/DTM (Digital Elevation/Terrain Model), but rather a DFM (Digital Feature Model), which consists of the ground and all potentially relevant structures such as buildings, but always excludes vegetation.

Parameters:
  • resolution (float) – The mesh resolution in meters. Adapt this depending on the scale of the features you are looking for and the point density of your Lidar data.

  • classification (tuple) – The classification values to include into the written mesh file.
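As an illustrative sketch, a DFM restricted to ground and building points might be produced like this (the filename is a placeholder; 2 and 6 are the standard ASPRS codes for ground and building):

```python
import afwizard

ds = afwizard.DataSet(filename="survey_area.laz")  # placeholder file

# Rasterize at 25 cm resolution using only ground (2) and building (6)
# ASPRS classification values.
dfm = ds.rasterize(resolution=0.25, classification=(2, 6))
```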

restrict(segmentation=None, segmentation_overlay=None)

Restrict the data set to a spatial subset

This is of vital importance when working with large Lidar datasets in AFwizard. The interactive exploration process for filtering pipelines requires a reasonably sized subset to allow fast previews.

Parameters:
  • segmentation (afwizard.segmentation.Segmentation) – A segmentation object that provides the geometric information for the cropping. If omitted, an interactive selection tool is shown in Jupyter.

  • segmentation_overlay (afwizard.segmentation.Segmentation) – A segmentation object that is overlaid on the map for easier use of the restriction app.
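A sketch of a non-interactive restriction, assuming restrict() returns the restricted dataset (both filenames are placeholders):

```python
import afwizard

ds = afwizard.DataSet(filename="survey_area.laz")         # placeholder file
seg = afwizard.load_segmentation("study_subset.geojson")  # placeholder file

# Crop the dataset to the polygon(s) described by the segmentation.
subset = ds.restrict(segmentation=seg)
```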

save(filename, overwrite=False)

Store the dataset as a new LAS/LAZ file

This method writes the Lidar dataset represented by this data structure to an LAS/LAZ file. This includes the classification values, which may have been overridden by a filter pipeline.

Parameters:
  • filename (str) – Where to store the new LAS/LAZ file. You can either specify an absolute path or a relative path. Relative paths are interpreted w.r.t. the current working directory.

  • overwrite (bool) – If this parameter is false and the specified filename already exists, an error is raised. This prevents accidental corruption of valuable data files.

Returns:

A dataset object wrapping the written file

Return type:

afwizard.DataSet
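A short sketch of persisting a dataset (the filenames are placeholders):

```python
import afwizard

ds = afwizard.DataSet(filename="survey_area.laz")  # placeholder file

# Write the (possibly re-classified) dataset to a new file, replacing
# any existing file of that name.
saved = ds.save("survey_area_filtered.laz", overwrite=True)
```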

show(visualization_type='hillshade', **kwargs)

Visualize the dataset in JupyterLab

Several visualization options can be chosen via the visualization_type parameter. Some of the arguments given below are only available for specific visualization types. To explore the visualization capabilities, you can also use the interactive user interface with show_interactive().

Parameters:
  • visualization_type (str) – Which visualization to use. Currently implemented values are hillshade for a greyscale 2D map, slopemap for a 2D map color-coded by slope, and blended_hillshade_slope, which blends the former two into each other.

  • classification (tuple) – Which classification values to include into the visualization. By default, all classes are considered. The best interface to provide this information is using afwizard.asprs.

  • resolution (float) – The spatial resolution in meters.

  • azimuth (float) – The azimuth angle of the sun in the xy plane, from [0, 360] (hillshade and blended_hillshade_slope only)

  • angle_altitude (float) – The altitude angle of the sun, from [0, 90] (hillshade and blended_hillshade_slope only)

  • alg (str) – The hillshade algorithm to use. Can be one of Horn and ZevenbergenThorne. (hillshade and blended_hillshade_slope only)

  • blending_factor (float) – The blending ratio used between hillshade and slope map from [0, 1]. (blended_hillshade_slope only)
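A sketch of a blended visualization using the parameters above (the filename is a placeholder; the numeric values are merely illustrative):

```python
import afwizard

ds = afwizard.DataSet(filename="survey_area.laz")  # placeholder file

# Blend hillshade and slope map equally; sun from the northwest at
# 45 degrees altitude.
ds.show(
    visualization_type="blended_hillshade_slope",
    resolution=1.0,
    azimuth=315,
    angle_altitude=45,
    blending_factor=0.5,
)
```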

show_interactive()

Visualize the dataset with interactive visualization controls in Jupyter

afwizard.add_filter_library(path=None, package=None, recursive=False, name=None)

Add a custom filter library to this session

AFwizard keeps a list of filter libraries that it browses for filter pipeline definitions. This function adds a new directory to that list. You can use it to organize filter files on your hard disk.

Parameters:
  • path (str) – The filesystem path where the filter library is located. The filter library is a directory containing a number of filter files and potentially a library.json file containing metadata.

  • package (str) – Alternatively, you can specify a Python package that is installed on the system and that contains the relevant JSON files. This is used for AFwizard’s library of community-contributed filter pipelines.

  • recursive (bool) – Whether the file system should be traversed recursively from the given directory to find filter pipeline definitions.

  • name (str) – A display name to override the name provided by library metadata
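For illustration, registering a project-specific directory of filter files might look as follows (path and display name are placeholders):

```python
import afwizard

# Register a directory of filter pipeline JSON files as an extra library,
# searching subdirectories as well.
afwizard.add_filter_library(path="my_filters", recursive=True, name="Project filters")
```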

afwizard.apply_adaptive_pipeline(dataset=None, segmentation=None, pipelines=None, output_dir='output', resolution=0.5, compress=False, suffix='filtered')

Python API to apply a fully configured adaptive pipeline

This function implements the large scale application of a spatially adaptive filter pipeline to a potentially huge dataset. This can either be used from Python or through AFwizard’s command line interface.

Parameters:
  • dataset (list) – One or more datasets of type afwizard.dataset.DataSet.

  • segmentation (afwizard.segmentation.Segmentation) – The segmentation that provides the geometric information about the spatial segmentation of the dataset and what filter pipelines to apply in which segments.

  • output_dir (str) – The output directory to place the generated output in. Defaults to a subdirectory ‘output’ within the current working directory.

  • resolution (float) – The resolution in meters to use when generating GeoTiff files.

  • compress (bool) – Whether to write LAZ files instead of LAS files.

  • suffix (str) – A suffix to use for files after applying filtering
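A sketch of a batch application run; the filenames are placeholders, and the segmentation is assumed to already carry pipeline assignments (e.g. produced with assign_pipeline()):

```python
import afwizard

ds = afwizard.DataSet(filename="survey_area.laz")     # placeholder file
seg = afwizard.load_segmentation("segments.geojson")  # placeholder file

# Apply the spatially adaptive pipeline over the whole dataset, writing
# compressed LAZ output and 0.5 m GeoTiffs into ./output.
afwizard.apply_adaptive_pipeline(
    dataset=ds,
    segmentation=seg,
    output_dir="output",
    resolution=0.5,
    compress=True,  # write LAZ instead of LAS
)
```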

afwizard.assign_pipeline(dataset, segmentation, pipelines)

Load a segmentation object with one or more multipolygons and a list of pipelines, and assign each multipolygon to one pipeline.

Parameters:
  • segmentation (afwizard.segmentation.Segmentation) – This segmentation object needs to have one multipolygon for every type of ground class (dense forest, steep hill, etc.).

  • pipelines (list of afwizard.filter.Pipeline) – All pipelines that one wants to link with the given segmentations.

Returns:

A segmentation object with added pipeline information

Return type:

afwizard.segmentation.Segmentation
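The assignment step can be sketched as follows (the filenames are placeholders; the pipelines may come from any source, e.g. the library selection UI):

```python
import afwizard

ds = afwizard.DataSet(filename="survey_area.laz")           # placeholder file
seg = afwizard.load_segmentation("ground_classes.geojson")  # placeholder file
pipelines = afwizard.select_pipelines_from_library()        # interactive choice

# Interactively link each multipolygon to one of the chosen pipelines.
assigned = afwizard.assign_pipeline(ds, seg, pipelines)
```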

afwizard.execute_interactive(dataset, pipeline)

Interactively apply a filter pipeline to a given dataset in Jupyter

This allows you to interactively explore the effects of end user configuration values specified by the filtering pipeline.

Parameters:
  • dataset (afwizard.DataSet) – The dataset to work on

  • pipeline – The pipeline to execute.

Returns:

The pipeline with the end user configuration baked in

Return type:

afwizard.filter.Pipeline

afwizard.load_filter(filename)

Load a filter from a file

This function restores filters that were previously saved to disk using the save_filter() function.

Parameters:

filename (str) – The filename to load the filter from. Relative paths are interpreted w.r.t. the current working directory.

afwizard.load_segmentation(filename, spatial_reference=None)

Load a GeoJSON segmentation from a file

Parameters:
  • filename (str) – The filename to load the GeoJSON file from.

  • spatial_reference (str) – The spatial reference of the segmentation file as WKT or EPSG code.

afwizard.pipeline_tuning(datasets=[], pipeline=None)

The Jupyter UI to create a filtering pipeline from scratch.

The use of this UI is described in detail in the notebook on creating filter pipelines.

Parameters:
  • datasets (list) – One or more instances of Lidar datasets to work on

  • pipeline (afwizard.filter.Pipeline) – A pipeline to use as a starting point. If omitted, a new pipeline object will be created.

Returns:

Returns the created pipeline object

Return type:

afwizard.filter.Pipeline
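A sketch of a typical tuning session (filenames are placeholders):

```python
import afwizard

ds = afwizard.DataSet(filename="survey_area.laz")  # placeholder file

# Open the tuning UI on the dataset; the returned pipeline captures the
# configuration chosen in the UI and can be persisted for later reuse.
pipeline = afwizard.pipeline_tuning(datasets=[ds])
afwizard.save_filter(pipeline, "my_pipeline.json")  # placeholder filename
```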

afwizard.print_version()

Print the current version of AFwizard

afwizard.remove_classification(dataset)

Remove the classification values from a Lidar dataset

Instead, all points will be classified as 1 (unclassified). This is useful to drop an automatic preclassification in order to create an archaeologically relevant classification from scratch.

Parameters:

dataset (afwizard.DataSet) – The dataset to remove the classification from

Returns:

A transformed dataset with unclassified points

Return type:

afwizard.DataSet

afwizard.reproject_dataset(dataset, out_srs, in_srs=None)

Standalone function to reproject a given dataset with the option of forcing an input reference system

Parameters:
  • out_srs (str) – The desired output spatial reference system in WKT.

  • in_srs (str) – The input spatial reference system in WKT from which to convert. Defaults to the dataset’s current reference system.

Returns:

A reprojected dataset

Return type:

afwizard.DataSet
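A reprojection sketch (the filename is a placeholder; the EPSG shorthand assumes EPSG codes are accepted for reference systems, as elsewhere in this API):

```python
import afwizard

ds = afwizard.DataSet(filename="survey_area.laz")  # placeholder file

# Reproject to a hypothetical target CRS; in_srs is only needed when the
# dataset's own reference system is missing or wrong.
reprojected = afwizard.reproject_dataset(ds, "EPSG:4326")
```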

afwizard.reset_filter_libraries()

Reset registered filter libraries to the default ones

The default libraries are the current working directory and the library of community-contributed filter pipelines provided by afwizard.

afwizard.save_filter(filter_, filename)

Save a filter to a file

Filters saved to disk with this function can be reconstructed with the load_filter() method.

Parameters:
  • filter_ (Filter) – The filter object to write to disk

  • filename (str) – The filename to write the filter to. Relative paths are interpreted w.r.t. the current working directory.
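A save/load round trip might look like this (the filename is a placeholder; the pipeline object would typically come from pipeline_tuning() or the library selection UI):

```python
import afwizard

# Persist a configured pipeline and restore it later.
pipeline = afwizard.select_pipeline_from_library()
afwizard.save_filter(pipeline, "my_pipeline.json")  # placeholder filename
restored = afwizard.load_filter("my_pipeline.json")
```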

afwizard.select_best_pipeline(dataset=None, pipelines=None)

Select the best pipeline for a given dataset.

The use of this UI is described in detail in the notebook on selecting filter pipelines.

Parameters:
  • dataset (afwizard.DataSet) – The dataset to use for visualization of ground point filtering results

  • pipelines (list) – The tentative list of pipelines to try. May e.g. have been selected using the select_pipelines_from_library tool.

Returns:

The selected pipeline with end user configuration baked in

Return type:

afwizard.filter.Pipeline

afwizard.select_pipeline_from_library(multiple=False)

The Jupyter UI to select filtering pipelines from libraries.

The use of this UI is described in detail in the notebook on filtering libraries.

Parameters:

multiple (bool) – Whether or not it should be possible to select multiple filter pipelines.

Returns:

Returns the selected pipeline object(s)

Return type:

afwizard.filter.Pipeline

afwizard.select_pipelines_from_library()

The Jupyter UI to select multiple filtering pipelines from libraries.

The use of this UI is described in detail in the notebook on filtering libraries.

Returns:

Returns the selected pipeline object(s)

Return type:

afwizard.filter.Pipeline

afwizard.set_current_filter_library(path, create_dirs=False, name='My filter library')

Set a library path that will be used to store filters in

Parameters:
  • path (str) – The path to store filters in. Might be an absolute path or a relative path that will be interpreted with respect to the current working directory.

  • create_dirs (bool) – Whether AFwizard should create this directory (and potentially some parent directories) for you

  • name (str) – The display name of the library (e.g. in the selection UI)

afwizard.set_data_directory(directory, create_dir=False)

Set a custom root directory to locate data files

Parameters:
  • directory (str) – The name of the custom data directory.

  • create_dir (bool) – Whether AFwizard should create the directory if it does not already exist.

afwizard.set_lastools_directory(dir)

Set custom LASTools installation directory

Use this function at the beginning of your code to point AFwizard to a custom LASTools installation directory. Alternatively, you can use the environment variable LASTOOLS_DIR to do so.

Parameters:

dir (str) – The LASTools installation directory to use

afwizard.set_opals_directory(dir)

Set custom OPALS installation directory

Use this function at the beginning of your code to point AFwizard to a custom OPALS installation directory. Alternatively, you can use the environment variable OPALS_DIR to do so.

Parameters:

dir (str) – The OPALS installation directory to use
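A sketch of configuring both external backends at startup (both paths are placeholders; the OPALS_DIR and LASTOOLS_DIR environment variables are an alternative to these calls):

```python
import afwizard

# Point AFwizard at external backend installations before any filtering.
afwizard.set_opals_directory("/opt/opals")        # placeholder path
afwizard.set_lastools_directory("/opt/lastools")  # placeholder path
```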