internal modules

_corine

Module for querying CORINE land cover classes and calculating mean roughness.

This module provides functions to query CORINE land cover classes based on geographic coordinates and calculate the mean roughness of a specified area.

austaltools._corine.corine_file_help()
austaltools._corine.corine_file_load()
austaltools._corine.mean_roughness(source: str, xg: float, yg: float, h: float, fac=10.0) → float

Returns the mean roughness of an area based on CORINE land cover classes from either source: “web” for the EEA web API, or “austal” for the CORINE inventory of a local AUSTAL installation.

Parameters:
  • source (str) – source of CORINE land cover classes.

  • xg (float) – X-coordinate of the center point.

  • yg (float) – Y-coordinate of the center point.

  • h (float) – Radius of the area to calculate mean roughness.

  • fac (float, optional) – Factor to determine the density of sample points (default is 10).

Returns:

Mean roughness of the specified area.

Return type:

float
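
The sampling-and-averaging behaviour of mean_roughness() can be sketched in a few lines. The square sampling grid, the h/fac spacing, and the plain arithmetic mean below are illustrative assumptions, not the package’s actual implementation:

```python
import math

def sketch_mean_roughness(xg, yg, h, z0_lookup, fac=10.0):
    """Illustrative mean roughness: sample a grid of points within
    radius h around (xg, yg) and average the roughness lengths
    returned by z0_lookup(x, y)."""
    step = h / fac          # sample spacing controlled by the density factor
    n = int(fac)
    values = []
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            x, y = xg + i * step, yg + j * step
            if math.hypot(x - xg, y - yg) <= h:   # keep points inside the circle
                values.append(z0_lookup(x, y))
    return sum(values) / len(values)
```

With a constant lookup the result is that constant; in the real module the lookup is a CORINE class query followed by a class-to-z0 mapping.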

austaltools._corine.query_corine_class(lat: float, lon: float) → int

Queries the CORINE land cover class for a given latitude and longitude from the EEA web API.

Parameters:
  • lat (float) – Latitude of the location to query.

  • lon (float) – Longitude of the location to query.

Returns:

CORINE land cover class code for the specified location.

Return type:

int

austaltools._corine.roughness_austal(xg: float, yg: float, h: float, fac: int = None) → int

Looks up the CORINE land cover class for a given latitude and longitude in the CORINE data file distributed with AUSTAL.

Parameters:
  • xg (float) – X-coordinate of the center point.

  • yg (float) – Y-coordinate of the center point.

  • h (float) – Radius of the area to sample points.

  • fac (float, optional) – Factor to determine the density of sample points (default is 10).

Returns:

CORINE land cover class code for the specified location.

Return type:

int

austaltools._corine.roughness_web(xg: float, yg: float, h: float, fac=10.0) → float

Calculates the mean roughness of an area based on CORINE land cover classes.

Parameters:
  • xg (float) – X-coordinate of the center point.

  • yg (float) – Y-coordinate of the center point.

  • h (float) – Radius of the area to calculate mean roughness.

  • fac (float, optional) – Factor to determine the density of sample points (default is 10).

Returns:

Mean roughness of the specified area.

Return type:

float

austaltools._corine.sample_points(xg: float, yg: float, h: float, fac: int = None) → list

Generates a list of sample points within a specified radius.

Parameters:
  • xg (float) – X-coordinate of the center point.

  • yg (float) – Y-coordinate of the center point.

  • h (float) – Radius of the area to sample points.

  • fac (float, optional) – Factor to determine the density of sample points (default is 10).

Returns:

List of tuples representing the sample points (x, y).

Return type:

list

austaltools._corine.LANDCOVER_CLASSES_Z0_CORINE

Dictionary mapping CORINE class codes [JCR07] to roughness lengths [TAL2002] (in meters).
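
A mapping of this shape is used by looking up the class code returned for a location. The codes and z0 values below are hypothetical placeholders, not the contents of LANDCOVER_CLASSES_Z0_CORINE:

```python
# Hypothetical excerpt of a CORINE-class -> roughness-length mapping (m);
# the real codes and values are defined by TA Luft and the package, not here.
Z0_BY_CLASS = {
    112: 1.0,   # placeholder: e.g. discontinuous urban fabric
    211: 0.05,  # placeholder: e.g. non-irrigated arable land
    311: 1.5,   # placeholder: e.g. broad-leaved forest
}

def roughness_for_class(code, default=0.1):
    """Return the roughness length for a CORINE class code,
    falling back to a default for unmapped codes."""
    return Z0_BY_CLASS.get(code, default)
```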

austaltools._corine.LANDCOVER_CLASSES_Z0_LBM_DE

Dictionary mapping LBM-DE (Digitales Landbedeckungsmodell für Deutschland) class codes to roughness lengths [TAL2021] (in meters).

austaltools._corine.REST_API_URL

URL for the REST API endpoint to query CORINE land cover classes.

_datasets

Module that provides functions to assemble, download, and handle datasets that serve as input for austaltools

unpack string:

Several functions make use of the unpack string that describes how to extract data from a (downloaded) file. The syntax is simple:

  • If empty, missing, ‘false’, or ‘tif’, the downloaded file itself is regarded as the data file.

  • Strings starting with ‘zip://’ or ‘unzip://’ command unpacking of the files matching the glob pattern following ‘://’. Any paths contained in this expression are discarded; all files are extracted to the working directory.

Example:
zip://data/*.tif

unpacks all files from the archive that are in directory data and end in .tif
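
A minimal interpreter for this unpack-string syntax could look like the following sketch; the helper name interpret_unpack and its return convention are illustrative, not part of the module:

```python
import fnmatch
import os
import zipfile

def interpret_unpack(unpack, downloaded, workdir):
    """Illustrative unpack-string handling: return the data files that
    the downloaded file (or the archive members it contains) yields."""
    if not unpack or unpack in ("false", "tif"):
        return [downloaded]        # the downloaded file itself is the data file
    if unpack.startswith(("zip://", "unzip://")):
        pattern = unpack.split("://", 1)[1]
        out = []
        with zipfile.ZipFile(downloaded) as zf:
            for member in zf.namelist():
                if fnmatch.fnmatch(member, pattern):
                    # discard any path inside the archive, extract flat
                    target = os.path.join(workdir, os.path.basename(member))
                    with zf.open(member) as src, open(target, "wb") as dst:
                        dst.write(src.read())
                    out.append(target)
        return out
    raise ValueError("unknown unpack string: %r" % unpack)
```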

class austaltools._datasets.DataSet(**kwargs)

Class that describes and handles a dataset

assemble(path, name, replace, args)

Function that generates the dataset from the original source.

In an empty DataSet object, this function is just a placeholder and does nothing.

Parameters:
  • path (str) – path to the storage location where the dataset shall reside

  • name (str) – name of the dataset (short uppercase code)

  • replace (bool) – replace the dataset if it already exists

  • args (dict) – arguments to the assembling function that generates the dataset

Returns:

Whether the assembly was successful

Return type:

bool

download(path=None, uri=None)

Download the assembled dataset from the repository

Parameters:
  • path (str, optional) – path to the storage location where the dataset shall reside. Only needed if the attribute DataSet.path is not set or should be overridden.

  • uri (str, optional) – uri describing the location from where the assembled dataset shall be downloaded. Only needed if the attribute DataSet.uri is not set or should be overridden.

arguments = None

arguments to the assemble function that generates the dataset from the original source.

available = False

Whether the dataset is available on the system

file_data = None

name of the file containing the data of the dataset

file_license = None

name of the file containing the license of the dataset

file_notice = None

name of the file containing the notice to be shown if the dataset is used

license = None

source of the license of the dataset

name = ''

ID of the dataset (short uppercase code)

notice = None

text of the notice to be shown

path = None

Path of the storage location where the dataset resides (if available)

position = None

keyword describing how the position is provided

storage = None

Kind of dataset. Also the name of the storage (i.e. subdirectory of the storage location) the dataset is stored in.

uri = None

uri describing the location from where the assembled dataset can be downloaded. Currently supported: http(s):// and doi:// (if such a location exists)

years = []

list of years covered by the dataset (if storage is ‘weather’)

austaltools._datasets.assebmle_GTOPO30(path: str, name='GTOPO30', replace=False, args=None)

Special function to assemble the GTOPO30 elevation model (DEM) from UCAR.edu.

Note

GTOPO30 has worldwide coverage, but only the tile ‘W020N90’ is downloaded, as only this one covers the area where the target SRS is valid.

Parameters:
  • path (str) – Path where to generate the file

  • name (str) – name (code) of the dataset to assemble

  • replace (bool) – If True, an existing file is overwritten. If False, an error is raised if the file already exists.

  • args (dict) – Optionally accepted for compatibility with the general assemble function call. Is not evaluated.

Returns:

Success (True) or failure (False)

Return type:

bool

austaltools._datasets.assebmle_aw3d30(path: str, name='SRTM', replace=False, args=None)

Special function to assemble the ALOS Global Digital Surface Model “ALOS World 3D - 30m (AW3D30)” JAXA.

Note

To run this function successfully, the user must have an active user account, which can be obtained at the Earth Observation Research Center (EORC) website of the Japan Aerospace Exploration Agency (JAXA): <https://www.eorc.jaxa.jp/ALOS/en/dataset/aw3d30/registration.htm>

Parameters:
  • path (str) – Path where to generate the file

  • name (str) – name (code) of the dataset to assemble

  • replace (bool) – If True, an existing file is overwritten. If False, an error is raised if the file already exists.

  • args (dict) – Optionally accepted for compatibility with the general assemble function call. Is not evaluated.

Returns:

Success (True) or failure (False)

Return type:

bool

austaltools._datasets.assebmle_srtm(path: str, name='SRTM', replace=False, args=None)

Special function to assemble the SRTM V3 digital elevation model (DEM) from USGS.gov.

Note

To run this function successfully, the user must have an active EarthData Login account that can be obtained at the NASA earthdata website: <https://urs.earthdata.nasa.gov/users/new>

Parameters:
  • path (str) – Path where to generate the file

  • name (str) – name (code) of the dataset to assemble

  • replace (bool) – If True, an existing file is overwritten. If False, an error is raised if the file already exists.

  • args (dict) – Optionally accepted for compatibility with the general assemble function call. Is not evaluated.

Returns:

Success (True) or failure (False)

Return type:

bool

austaltools._datasets.assemble_DGM_SH(path, name, replace, args: dict)

Special function to assemble a digital elevation model (DEM) of the German state Schleswig-Holstein (SH). It is designed to scrape their “Downloadclient” website.

Parameters:
  • path (str) – Path where to generate the file

  • name (str) – name (code) of the dataset to assemble

  • replace (bool) – If True, an existing file is overwritten. If False, an error is raised if the file already exists.

  • args (dict) – Optionally accepted for compatibility with the general assemble function call. Is not evaluated.

Returns:

Success (True) or failure (False)

Return type:

bool

austaltools._datasets.assemble_DGM_composit(path: str, name: str, replace: bool = False, args=None)

Special function to assemble a digital elevation model (DEM) that is a composite of other datasets, files, or a mixture thereof.

Note

If a composite includes other datasets, they must be assembled before calling this function.

Parameters:
  • path (str) – Path where to generate the file

  • name (str) – name (code) of the dataset to assemble

  • replace (bool) – If True, an existing file is overwritten. If False, an error is raised if the file already exists.

  • args (dict|None) – Dictionary containing the command arguments.

Returns:

Success (True) or failure (False)

Return type:

bool

austaltools._datasets.assemble_DGMxx(path: str, name: str, replace: bool, args: dict)

Versatile function to assemble a dataset containing a digital elevation model (DEM), German: “digitales Geländemodell” (DGM), of user-selectable resolution.

Parameters:
  • path (str) – Path and filename of the file to generate

  • name (str) – name (code) of the dataset to assemble

  • replace (bool) – If True, an existing file is overwritten. If False, an error is raised if the file already exists.

  • args (dict) –

    The arguments needed to perform the assembly. For more details see configure-austaltools.

    • ”host”: (str) Hostname and protocol from where to download data. Supported protocols are "http://...", "https://...", and "file:///...".

    • ”path”: (str) Storage path on the host, defaults to “”.

    • ”cert-check”: (str, optional) Whether to check the server certificates of host. Disable verification by setting this value to “no” or “false”. Defaults to “true”.

    • ”filelist”: (str or list, optional) list of filenames to download, or “generate”, or a path or URL to a file that contains this list.

    • ”localstore”: (str, optional) path to local storage of the downloaded files. Locally saved files have priority over downloaded files. Successfully downloaded files are copied to this location.

    • ”jsonpath”: (str, optional) Pattern describing how to extract the file list from filelist if it points to a json file. See jsonpath()

    • ”xmlpath”: (str, optional) Pattern describing how to extract the file list from filelist if it points to an xml file. See xmlpath()

    • ”links”: (str, optional) Regular expression to extract the file list from filelist if it points to an html file, by filtering all links in filelist.

    • ”missing”: (str, optional) if ‘ok’ or ‘ignore’, an empty list is returned if the URL download fails with error 404 (not found)

    • ”unpack”: (str, optional) the description of what to unpack (see unpack string)

    • ”CRS”: (str, optional) the reference system of the input data (in the form “EPSG:xxxx”)

    • ”utm_remove_zone”: (str, optional) If ‘True’, ‘true’, or ‘yes’, True is passed to _fetch_dgm_od._ass_reduce_tile()

Returns:

Success (True) or failure (False)

Return type:

bool
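
An args dictionary for assemble_DGMxx() might be shaped like the following; the host, paths, and patterns are hypothetical placeholders, shown only to illustrate how the keys fit together:

```python
# Hypothetical args for assemble_DGMxx; every value is a placeholder.
dgm_args = {
    "host": "https://example.org",       # download host (placeholder)
    "path": "opendata/dgm/",             # storage path on the host
    "cert-check": "true",                # verify server certificates
    "filelist": "tiles.json::json",      # file listing the tiles, format-tagged
    "jsonpath": "$.datasets[*].name",    # where the filenames sit in the JSON
    "unpack": "zip://*.tif",             # extract all GeoTIFFs from each archive
    "CRS": "EPSG:25832",                 # reference system of the input data
}
```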

austaltools._datasets.assemble_DWD(path: str, name='DWD', years: list = None, replace: bool = False, args=None)

Downloads, extracts, and merges DWD dataset observations for specified years into single NetCDF files per year.

Parameters:
  • path (str) – The path where the final merged NetCDF files will be stored.

  • name (str) – name (code) of the dataset to assemble

  • years (list) – A list of years (integer) for which DWD data should be downloaded and processed.

  • replace (bool) – If True, an existing file is overwritten. If False, an error is raised if the file already exists.

  • args (dict) – Optionally accepted for compatibility with the general assemble function call. Is not evaluated.

Raises:

ValueError – If any of the years specified is outside the valid range (1940 to the current year).

Note

This function assumes that a global _tools.TEMP variable is defined and points to a valid temporary directory for intermediate files.

austaltools._datasets.assemble_GLO_30(path, name='GLO_30', replace: bool = False, args=None)

Special function to assemble the GLO_30 digital elevation model (DEM) from European Copernicus service.

Note

To run this function successfully, the user must have an active Copernicus user account that can be obtained at the Copernicus user’s portal: <https://cdsportal.copernicus.eu/web/spdm/registeruser>

Parameters:
  • path (str) – Path where to generate the file

  • name (str) – name (code) of the dataset to assemble

  • replace (bool) – If True, an existing file is overwritten. If False, an error is raised if the file already exists.

  • args (dict) – Optionally accepted for compatibility with the general assemble function call. Is not evaluated.

Returns:

Success (True) or failure (False)

Return type:

bool

austaltools._datasets.assemble_hostrada(path: str, name='HOSTRADA', years: list = None, replace: bool = False, args=None)

Downloads, extracts, and merges DWD HOSTRADA dataset for specified years into single NetCDF files per year.

Parameters:
  • path (str) – The path where the final merged NetCDF files will be stored.

  • name (str) – name (code) of the dataset to assemble

  • years (list) – A list of years (integer) for which DWD data should be downloaded and processed.

  • replace (bool) – If True, an existing file is overwritten. If False, an error is raised if the file already exists.

  • args (dict) – Optionally accepted for compatibility with the general assemble function call. Is not evaluated.

Raises:

ValueError – If any of the years specified is outside the valid range (1995 to the current year).

Note

This function assumes that a global _tools.TEMP variable is defined and points to a valid temporary directory for intermediate files.

austaltools._datasets.assemble_rea(path: str, name: str, years: str | int | None = None, replace: bool = False, args: dict | None = None)

Downloads, extracts, and merges ERA5 or CERRA datasets for specified years into single NetCDF files per year.

This function orchestrates the retrieval and processing of reanalysis / forecast datasets for a list of years. For each year, it fetches data for multiple lead times, extracts a specific region from the datasets, and then merges the forecast data into a single NetCDF file per year. This function assumes a temporary directory is defined for intermediate data storage.

Parameters:
  • path (str) – The path where the final merged NetCDF files will be stored.

  • name (str) – name (code) of the dataset to assemble. Supports “ERA5” or “CERRA”.

  • years (list) – A list of years (integer) for which data should be downloaded and processed. The years should fall within the range of 1940 to the current year.

  • replace (bool) – If True, an existing file is overwritten. If False, an error is raised if the file already exists.

  • args (dict) –

    The arguments needed to perform the assembly. For more details see configure-austaltools.

    • ”parallel_queries”: (int, optional) number of parallel queries that are accepted by the CDS for this dataset. Defaults to CDS_PARALLEL_QUERIES.

    • ”chunks”: (str | bool, optional) Whether to query the data in one piece (1 or False), in monthly chunks (12 or True), or in 2, 3, or 4 multi-month chunks. A single chunk results in a few large files to transfer, which are usually ranked lower in the request queue and may exceed the maximum query size. Small chunks create many more queries, which are usually answered faster. Defaults to CDSAPI_CHUNKS.

    • ”window”: (str, optional) How to subset the original dataset. Known keywords are:

      - “area”: use the subsetting functionality the CDS provides for several datasets, for example ERA5.

      - “index”: subset the data after download, based on the grid index.

      - “dimension”: subset the data after download, based on the values of the dimension variables.

      Requires “nwse” or “xxyy”.

    • ”nwse”: (list[float, float, float, float], optional) values determining the window boundaries, in the order: north (maximum y coordinate or latitude), west (minimum x coordinate or longitude), south (minimum y coordinate or latitude), and east (maximum x coordinate or longitude).

    • ”xxyy”: (list[float, float, float, float], optional) values determining the window boundaries, in the order: minimum x coordinate or longitude, maximum x coordinate or longitude, minimum y coordinate or latitude, and maximum y coordinate or latitude.

Raises:

ValueError – If any of the years specified is outside the valid range (1940 to the current year).

Example:
>>> # To process CERRA data for the years 2015 and 2016
>>> assemble_rea('/path/to/final/storage', 'CERRA',
...     years=[2015, 2016])
Note:

  • A temporary directory for storing intermediate data files is required. This directory is assumed to be configured before the function call.

  • After processing, intermediate data files are removed to free up space.

  • This function assumes that a global _tools.TEMP variable is defined and points to a valid temporary directory for intermediate files.
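
The two window orderings accepted above carry the same information; converting between them is pure reordering, as this illustrative helper (not part of the module) shows:

```python
def nwse_to_xxyy(nwse):
    """Convert a window given as (north, west, south, east) into
    (xmin, xmax, ymin, ymax) ordering."""
    north, west, south, east = nwse
    return [west, east, south, north]
```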

austaltools._datasets.cds_get_cerra_year(year: int, chunks: int | bool = True, maxparallel: int = None, area: list | None = None, subset: list | None = None)

Downloads and processes a year’s worth of CERRA dataset as GRIB files, then converts them to NetCDF format for easier use.

This function takes a tuple containing the year (y) and lead time (lt) for the forecast data. It builds the filename for the GRIB file from these parameters and checks if it exists locally. If not, it uses the CDS API to retrieve the data for all specified variables over the entire year, saving it as a GRIB file. After downloading, the function processes the GRIB file, converting it to a NetCDF file for more convenient analysis and removes the original GRIB file to conserve space.

Requires the cdsapi package, as well as an active Copernicus account for data retrieval.

Parameters:
  • year (int) – The year of the dataset to retrieve.

  • chunks (int | bool) – Whether to retrieve monthly chunks or yearly files. If True or 12, monthly chunks are downloaded. If False or 1, the year is downloaded in one piece (which exceeds current limits as of Apr 2025). If 2, 3, 4, or 6, the year is downloaded in that many multi-month chunks (which can be faster, depending on the queue length).

  • maxparallel (int | None) – number of parallel queries that are submitted to the CDS API. Or None for the default value.

  • area (None) – accepted for consistency with other cds_get_... functions.

  • subset (list[int, int, int, int] | None) – subset to extract after downloading data from the CDS database, in terms of grid cell indices in the order “xmin, xmax, ymax, ymin”, or None for the default value.

Returns:

None. The function’s primary purpose is file I/O (downloading and converting data). It does not return a value but will print status messages regarding its progress.

Raises:
  • RuntimeError – If nothing was downloaded.

  • ValueError – If chunks is neither a divisor of 12 nor True or False.

Example:
>>> # To download and process the CERRA data for the year 2023
>>> cds_get_cerra_year(2023)
Note:

  • The ‘cdsapi’ Client is used for data retrieval, requiring a valid CDS API key set up as per the CDS API’s documentation.

  • This function assumes cerraname returns a base filename to which .grib or .nc is appended for output files.
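
The chunks logic described above can be sketched as follows; month_chunks is an illustrative helper, not the function’s internals. True maps to 12 monthly chunks, False to one chunk, and any other accepted value must divide 12:

```python
def month_chunks(chunks):
    """Split the twelve months of a year into download chunks.
    Returns a list of (first_month, last_month) tuples."""
    if chunks is True:
        chunks = 12
    elif chunks is False:
        chunks = 1
    if not isinstance(chunks, int) or chunks <= 0 or 12 % chunks != 0:
        raise ValueError("chunks must be True, False, or a divisor of 12")
    months_per_chunk = 12 // chunks
    return [(m, m + months_per_chunk - 1)
            for m in range(1, 13, months_per_chunk)]
```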

austaltools._datasets.cds_get_era5_year(year: int, chunks: int | bool = True, maxparallel: int = None, area: list | None = None, subset: list | None = None)

Downloads ERA5 reanalysis data for a specific year and saves it as a NetCDF file.

The function calls the Climate Data Store (CDS) API to retrieve a specific set of meteorological variables for the entire year specified by the user. It requests data in NetCDF format, covering a predefined geographic extent focusing on Alaska and Europe. This function is specifically designed to automate the retrieval process for ERA5 weather variables, saving the data in a structured format that’s easier to work with for further analysis.

Parameters:
  • year (int) – The year for which to download the data (integer).

  • chunks (int | bool) – Whether to retrieve monthly chunks or yearly files. If True or 12, monthly chunks are downloaded. If False or 1, the year is downloaded in one piece (which exceeds current limits as of Apr 2025). If 2, 3, 4, or 6, the year is downloaded in that many multi-month chunks (which can be faster, depending on the queue length).

  • maxparallel (int | None) – number of parallel queries that are submitted to the CDS API. Or None for the default value.

  • area (list[float, float, float, float] | None) – Area to extract from the CDS database as a list of “North, West, South, East” (maximum latitude, minimum longitude, minimum latitude, maximum longitude), or None for the default value.

  • subset (None) – Accepted for consistency with other cds_get_... functions.

Returns:

None. The function saves a NetCDF file to the specified path but does not return any value.

Example:
>>> # To download ERA5 data for the year 2020 and
>>> # save it to the specified directory
>>> cds_get_era5_year(2020)
Note:
  • The function crafts a filename based on the year, prefixing it with era5_ak_eu_ to denote the region and type of data retrieved. Ensure that the specified directory exists and is writable.

  • The library cdsapi must be installed and a valid CDS API key must be configured as per the cdsapi package documentation.

austaltools._datasets.cds_get_order_list(args_list: list, maxparallel: int | None = None) → list[str]

Execute a list of orders by submitting queries to CDS either sequentially or in parallel, depending on the constant RUNPARRALEL.

Parameters:
  • args_list (list) – list of order data dictionaries, as accepted by cds_getorder()

  • maxparallel (int | None) – number of parallel queries that are submitted to the CDS API, or None for the default value.

Returns:

list of downloaded files

Return type:

list[str]

austaltools._datasets.cds_getorder(order_args: dict[str, str | dict]) → str

Execute a CDS order (helper function to execute orders in parallel)

Parameters:

order_args (dict) – order data dictionary

Returns:

filename

Return type:

str

The order data dictionary must contain the keys:
  • dataset: Name of the cds dataset to get data from.

  • request: Body of the request, as described in the Climate Data Store API HowTo

  • target: Name of the file to produce

austaltools._datasets.cds_merge_zipped(source, destination, compression='zlib')

Merge the files in a zipped archive downloaded from cds.climate.copernicus.eu into one netCDF file.

Parameters:
  • source (str) – path of the archive file to read

  • destination (str) – path of the destination file to create

  • compression (str | None) – (optional) compression type, defaults to COMPRESS_NETCDF

austaltools._datasets.cds_processorder(order_args: dict[str, str | dict], compression: str | None = 'zlib') → str

Preprocess a file downloaded by austaltools._datasets.cds_getorder() by converting a downloaded file that is a zip archive containing netCDF files (new since 2024) into one plain netCDF file and/or by optionally subsetting the data.

Parameters:

order_args – order data dictionary

Returns:

filename of the produced file

Return type:

str

austaltools._datasets.cds_replace_valid_time(compression: str | None = 'zlib')

Replaces the variable valid_time in ECMWF products (measured in seconds since 1970-01-01) by the more widely used variable time (measured in hours since 1900-01-01).

Parameters:

compression (str | None) – compression method for netCDF files produced. Usually ‘zlib’. Defaults to COMPRESS_NETCDF.

Returns:

replace and convert dictionaries for use with functions from the _netcdf module.

Return type:

dict, dict
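
The unit conversion involved is a fixed offset between the two reference epochs; a short sketch (helper name is illustrative):

```python
from datetime import datetime

# Hours between the ECMWF epoch ("seconds since 1970-01-01") and the
# older convention ("hours since 1900-01-01"): 25567 days = 613608 h.
EPOCH_OFFSET_HOURS = (
    datetime(1970, 1, 1) - datetime(1900, 1, 1)
).total_seconds() / 3600

def valid_time_to_time(seconds_since_1970):
    """Convert a valid_time value to hours since 1900-01-01."""
    return seconds_since_1970 / 3600 + EPOCH_OFFSET_HOURS
```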

austaltools._datasets.dataset_available(name)

Return whether the dataset is available

Parameters:

name (str) – dataset id of dataset to be checked

Returns:

True if dataset is available, False otherwise

Return type:

bool

austaltools._datasets.dataset_get(name)

Return the dataset with the given ID

Parameters:

name (str) – dataset ID

Returns:

the requested dataset object

Return type:

Dataset

Raises:

ValueError – if the dataset does not exist

austaltools._datasets.dataset_list()

Get list of datasets

Returns:

dictionary of all known datasets

Return type:

dict[dict]

Raises:

ValueError – if the dataset does not exist

austaltools._datasets.dgm1_sh_getfid(args)

Get an individual file for DGM1-SH

Parameters:

args (tuple[int, int, int, dict]) – download number, total number of downloads, file-id, args

Returns:

names of extracted files

Return type:

list[str]

austaltools._datasets.expand_filelist_string(string, base_url, verify, xmlp, jsonp, linkp)

Extract the file list from a text (html, xml, meta4, json, geojson)

Parameters:
  • string (str) – string describing the file to parse. Format: <filename>::<file-format>

  • base_url (str) – the url including the directory where the file is to be found

  • verify (bool) – verify (or not) the server certificates when downloading the file via https

  • xmlp (str) – xml path to the file list items (when parsing xml, meta4)

  • jsonp (str) – jsonpath to the file list items (when parsing json)

  • linkp (str) – regex to match file list items

Returns:

file list

Return type:

list[str]

austaltools._datasets.find_terrain_data()

Searches all known storage locations for the known terrain datasets and yields a list of the datasets available locally.

Returns:

dataset IDs of the locally available datasets

Return type:

list[str]

austaltools._datasets.find_weather_data()

Searches all known storage locations for the known weather datasets and yields a list of the datasets available locally.

Returns:

dataset IDs of the locally available datasets

Return type:

list[str]

austaltools._datasets.get_dataset_crs(filename)

Query the projection of a geo data file.

Parameters:

filename (str) – name of the file (optionally with leading path)

Returns:

Projection of the geo data file in the form “EPSG:xxxx”

Return type:

str

austaltools._datasets.get_dataset_driver(filename)

Query the driver (i.e. file format) of a geo data file

Parameters:

filename (str) – name of the file (optionally with leading path)

Returns:

Driver (file format) of the geo data file

Return type:

str

austaltools._datasets.get_dataset_nodata(filename)

Query the NODATA value of a geo data file

Parameters:

filename (str) – name of the file (optionally with leading path)

Returns:

nodata value

Return type:

float

austaltools._datasets.merge_tiles(target: str, tile_files: list[str], ullr: tuple[float, float, float, float] = None)

Merge the GeoTiff files from all tiles into one file

Parameters:
  • target (str) – name (optionally including path) of the file to generate

  • tile_files (list[str]) – Input files to merge

  • ullr (tuple[float,float,float,float]) – upper left and lower right corner of the output area. If missing, the full area covered by the input tiles is produced.

Raises:

Exception – if gdal_merge aborts with error

austaltools._datasets.name_yearly(name, year)

Return the year-specific code for a yearly dataset

austaltools._datasets.process_input(args)

Worker function to process a downloaded file into one or more data (tile) file(s) of the desired resolution and projection

Parameters:

args (tuple[str, str, bool, dict]) –

tuple containing the arguments:

  • inp: location of the input file. Either file and path or URL

  • base_url: base url to prepend to inp, omitted if inp is a URL

  • verify: enable (True) or disable (False) server certificate check

  • provider: dict containing the processing arguments
    • provider[‘localstore’]: (str, optional) path where local copies of the download files are stored. Files that exist in this directory are copied from there and not downloaded. Successfully downloaded files are copied to this location.

    • provider[‘missing’]: (str, optional) if ‘ok’ or ‘ignore’, an empty list is returned if the URL download fails with error 404 (not found)

    • provider[“unpack”]: (str, optional) the description, what to unpack.

    • provider[“CRS”]: (str, optional) the reference system of the input data (in the form “EPSG:xxxx”)

    • provider[“utm_remove_zone”]: (str, optional) If ‘True’, ‘true’, or ‘yes’, True is passed to _ass_reduce()

Returns:

list of the generated files

Return type:

list[str]

austaltools._datasets.provide_stationlist(source: str = None, fmt: str = None, out: str = None)

Extract the stationlist from a data source

Parameters:
  • source (str) – code of the source dataset. Currently, only “DWD” is implemented

  • fmt (str|None) – file format to generate (csv or json)

  • out (str|None) – The path where the final merged file will be stored. If None, the stationlist will be sent to stdout.

austaltools._datasets.provide_terrain(source: str, path: str = None, force: bool = False, method: str = 'download')

Function that makes a terrain dataset (digital elevation model, DEM) locally available, using the chosen method.

Parameters:
  • source (str) – ID of the dataset to make available

  • path (str | None, optional) – Path to where to write the dataset files. If None, the lowest-priority (i.e. most system-wide) writable location of the standard storage locations in _tools.STORAGE_LOCATIONS is selected. Defaults to None.

  • force (bool, optional) – whether to overwrite a dataset that is already available. Defaults to False.

  • method (str) –

    The method of obtaining the dataset. Defaults to ‘download’.

    ’download’:

    the ready-assembled dataset is downloaded from a location specified in the dataset definition.

    ’assemble’:

    the dataset is created from data that are acquired (if possible) from an original supplier.

Raises:

ValueError – if method is not one of the allowed values.

austaltools._datasets.provide_weather(source: str, path: str | None = None, years: list | None = None, force: bool = False, method: str = 'download')

Manages the downloading and organizing of weather data from specified sources for given years into a target directory.

This function serves as a high-level interface for downloading weather datasets (for example, ERA5 or CERRA) for a specified set of years and organizing them into a specified directory. The function currently supports the ‘download’ method with potential for future expansion.

Parameters:
  • source (str) – The name of the weather dataset source. Currently supports “ERA5” or “CERRA”.

  • path (str, optional) – Optional; the file system path where the downloaded data will be saved. If not specified, the function attempts to find a writable storage location using find_writeable_storage.

  • years (list, optional) – A list of integer years for which to download the data. If not specified, no year-specific data fetching is performed, which may depend on the implementation details of the dataset handling functions.

  • force (bool, optional) – Whether to overwrite a dataset that is already available. Defaults to False.

  • method (str, optional) – The method to use for obtaining the data. Valid values are ‘download’ and ‘assemble’.

Returns:

A boolean value indicating the success (True) or failure (False) of the data downloading and organization process.

Example:
>>> # To download ERA5 data for the years 2020 and 2021
>>> # into the default storage location
>>> provide_weather("ERA5", years=[2020, 2021])
True
Note:

  • This function logs its operations, including informational messages on progress and errors encountered.

  • The actual implementation for finding writable storage or the setup for the logger is not defined in this function, and should be provided in the surrounding context.

Raises:

  • This function may raise exceptions internally but catches them to return a boolean success status. Detailed error information is logged.

austaltools._datasets.reduce_tile(tf1, out_res, overwrite=True)

Resamples a tile (or any file that can be auto-detected by GDAL) to a different resolution (only lower makes sense) and saves it as GeoTIFF

Parameters:
  • tf1 (str) – name (and optionally path) of the input file

  • out_res (float) – output resolution (i.e. pixel width) in km

  • overwrite (bool) – overwrite existing output file

Returns:

name (and path, if supplied in tf1) of the output file, or an empty string if no file is written

Return type:

str

austaltools._datasets.show_notice(storage_path, source)

Shows a notice to the user when a dataset is accessed, if this is required by the original supplier of the dataset.

Parameters:
  • storage_path (str) – path to the dataset files

  • source (str) – dataset ID

austaltools._datasets.stationlist_DWD(path: str = None, fmt: str = None, h: float | None = None)

Downloads, extracts, and merges DWD station lists.

Parameters:
  • path (str) – The path where the final merged file will be stored.

  • fmt (str) – file format or generate (csv or json)

  • h (float) – (optional) height of wind measurements in m; must be greater than 1. Defaults to the standard height (10)

Note:

This function assumes that a global _tools.TEMP variable is defined and points to a valid temporary directory for intermediate files.

austaltools._datasets.unpack_file(dl_file, unpack)

Unpack files from an archive

Parameters:
  • dl_file (str) – filename, optionally incl. path, of the archive (downloaded file)

  • unpack (str) – string describing what to unpack

Returns:

names of the files extracted

Return type:

list[str]

Raises:

ValueError – if unpack string is invalid

austaltools._datasets.update_available()

update availability of datasets stored in conf by scanning storage locations

austaltools._datasets.xyz2csv(inputfile, output, utm_remove_zone=False)

Clean the downloaded xyz files so that GDAL accepts them as CSV

Parameters:
  • inputfile (str) – input file

  • output (str) – output file

  • utm_remove_zone (bool) – Some providers prefix the UTM easting with the zone number, which results in easting values exceeding 1000 km. Remove the leading digits to keep the easting in the allowed range 0 m < easting < 1000000 m. Defaults to False.

Returns:

True if successful, False otherwise

Return type:

bool
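The utm_remove_zone behaviour described above can be sketched as follows; this is a minimal illustration, not the module's actual code, and the helper name strip_utm_zone is hypothetical:

```python
def strip_utm_zone(easting: float) -> float:
    """Drop a leading UTM zone number from an easting value.

    Some providers prefix the easting with the zone number
    (e.g. zone 32, easting 567890 m becomes 32567890.0),
    pushing the value past the valid range
    0 m < easting < 1000000 m.
    """
    # The true easting is always below 1,000,000 m, so the zone
    # prefix occupies the digits above that: a modulo removes it.
    if easting >= 1_000_000:
        return easting % 1_000_000
    return easting

print(strip_utm_zone(32567890.0))  # zone prefix removed
print(strip_utm_zone(567890.0))    # already in range, unchanged
```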

austaltools._datasets.xyz2tif(inputfile, srcsrs, utm_remove_zone=False)

convert xyz file (via csv) to GeoTiff

Parameters:
  • inputfile (str) – input file

  • srcsrs (str) – SRS of the input file (as “EPSG:xxxxx”)

  • utm_remove_zone (bool) – Some providers prefix the UTM easting with the zone number, which results in easting values exceeding 1000 km. Remove the leading digits to keep the easting in the allowed range 0 m < easting < 1000000 m. Defaults to False.

Returns:

output file name (GeoTiff)

Return type:

str

austaltools._datasets.CDSAPI_CHUNKS = True

Copernicus per-request limit does not permit download of yearly files and requires splitting up downloads. For possible values see austaltools._dataset.cds_get_cerra_year()

austaltools._datasets.CDSAPI_LIMIT_PARALLEL = 2

Copernicus per-user limit for parallel queries

austaltools._datasets.COMPRESS_NETCDF = 'zlib'

Standard compression method of netCDF files

austaltools._datasets.DATASETS: list

All known datasets as DataSet instances. Filled on demand.

austaltools._datasets.DEM_CRS = 'EPSG:5677'

standard DEM reference system

austaltools._datasets.DEM_FMT = '%s.elevation.nc'

terrain database file name template

austaltools._datasets.DEM_WINDOW = (47, 54, 5, 16)

standard lat/lon window for worldwide terrain datasets latmin, latmax, lonmin, lonmax

austaltools._datasets.OBS_FMT = '%s.obs.zip'

weather observation database file name template

austaltools._datasets.PROCS = None

Number of parallel processes to run when downloading data, or None (then the number of processor cores in the system is used).

austaltools._datasets.RUNPARALLEL = True

Use parallel processing (defaults to False on Windows systems)

austaltools._datasets.SOURCES_TERRAIN = ['DGM25-RP', 'DGM10-DE', 'DGM25-DE', 'DGM10-BB', 'DGM10-BE', 'DGM10-BW', 'DGM10-BY', 'DGM10-HB', 'DGM10-HE', 'DGM10-HH', 'DGM10-MV', 'DGM10-NI', 'DGM10-NW', 'DGM10-RP', 'DGM10-SH', 'DGM10-SL', 'DGM10-SN', 'DGM10-ST', 'DGM10-TH', 'GLO-30', 'GTOPO30', 'SRTM', 'AW3D30']

list of known terrain data sources

austaltools._datasets.SOURCES_WEATHER = ['ERA5', 'CERRA', 'HOSTRADA', 'DWD']

list of known weather data sources

austaltools._datasets.WEA_FMT = '%s.ak-input.nc'

weather model database file name template

austaltools._datasets.WEA_WINDOW = (33, 71, -12, 36)

standard lat/lon window for worldwide weather datasets latmin, latmax, lonmin, lonmax

_dispersion

This module provides functions to determine the stability classes used by atmospheric dispersion models, by various methods.

class austaltools._dispersion.StabiltyClass(bounds: list | tuple | None = None, centers: list | tuple | None = None, tabbed_values_inverted: bool = False, reverse_index: bool = True, names: list[str] | tuple[str] | None = None)

Class that holds information about a set of stability classes.

Parameters:
  • bounds (list[tuple[list]]) – for each stability class, a 2-element list or tuple must be given. The first list must contain the z0 values for the roughness length classes in ascending order; the second list must be of the same length and contain the boundary values of the Obukhov length separating the 1st and 2nd class, the 2nd and the 3rd, … . Mutually exclusive with centers.

  • centers (list[tuple[list]]) – for each stability class, a 2-element list or tuple must be given. The first list must contain the z0 values for the roughness length classes in ascending order; the second list must be of the same length and contain the center values of the Obukhov length for the 1st, 2nd, 3rd, … class. Mutually exclusive with bounds.

  • tabbed_values_inverted (bool) – False if the bounds or center values should be taken as they are. True if the values should be inverted, i.e. \(1/x\). Defaults to False.

  • reverse_index – False if the numeric class index is ascending. True if it is descending. Defaults to False.

  • names (list[str]) – Names of the stability classes. Must have the same length as centers or one element more than bounds.

get_bound(num: int, z0: float, inverted: bool = False) float

get the upper boundary value of the Obukhov length \(L\) for the class with index num for roughness length z0.

Parameters:
  • num (int) – numeric class index

  • z0 (float) – roughness length in m

  • inverted (bool (optional)) – True if \(1/L\) should be returned instead of \(L\)

Returns:

Obukhov length \(L\) in m

Return type:

float

get_center(num: int, z0: float, inverted: bool = False) float

get the center value of the Obukhov length \(L\) for the class with index num for roughness length z0.

Parameters:
  • num (int) – numeric class index

  • z0 (float) – roughness length in m

  • inverted (bool (optional)) – True if \(1/L\) should be returned instead of \(L\)

Returns:

Obukhov length \(L\) in m

Return type:

float

get_index(z0: float, lob: float, inverted: bool = False) int

get the numeric class index for roughness length z0 and Obukhov length \(L\).

Parameters:
  • z0 (float) – roughness length in m

  • lob – Obukhov length

  • inverted (bool (optional)) – True if lob is \(1/L\) instead of \(L\)

Returns:

Numeric class index

Return type:

int

get_name(z0: float, index: int, inverted: bool = False) str

get the class name for roughness length z0 and Obukhov length \(L\).

Parameters:
  • z0 (float) – roughness length in m

  • index – class index

  • inverted (bool (optional)) – True if lob is \(1/L\) instead of \(L\)

Returns:

class name

Return type:

str

index(name: str) int

get the numeric class index for class name

Parameters:

name (str) – class name

Returns:

numeric class index

Return type:

int

name(num: int) str

get the class name for numeric class index

Parameters:

num – numeric class index

Returns:

class name

Return type:

str

count = 0
names = None
reverse_index = False
austaltools._dispersion.h_eff(has: float | Series, z0s: float | Series) list[float]

Calculate effective anemometer heights for all nine z0 class values used by AUSTAL [AST31] from the actual height of the wind measurement

Parameters:
  • has (pandas.Series of float) – actual height of the wind measurement

  • z0s (pandas.Series of float) – roughness length at the position of the wind measurement

Returns:

heights for the nine roughness lengths at the model position, ordered from the smallest to the largest roughness length

Return type:

list[float]

Note:

The effective anemometer height is the height at which, considering the roughness at the model site, the same wind speed would be measured as by a nearby anemometer that is mounted at height has on a site where the roughness is z0s
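The note above can be illustrated with neutral logarithmic wind profiles: assuming the wind speed at some blending height href is the same over both sites, the effective height is where the profile over the model-site roughness reproduces the measured speed. This is a sketch of the concept only; the value href = 250 m is an assumption made for this illustration and the actual AUSTAL procedure may differ:

```python
import math

def effective_height(ha: float, z0s: float, z0m: float,
                     href: float = 250.0) -> float:
    """Height at the model site (roughness z0m) where a neutral
    log profile gives the same wind speed as measured at height
    ha over roughness z0s, assuming equal wind speed at href."""
    # relative wind speed at the anemometer, normalized to href
    r = math.log(ha / z0s) / math.log(href / z0s)
    # invert the log profile at the model-site roughness
    return z0m * (href / z0m) ** r

# same roughness at both sites: effective height equals ha
print(round(effective_height(10.0, 0.1, 0.1), 6))
# rougher model site: the same speed is found higher up
print(round(effective_height(10.0, 0.1, 0.5), 2))
```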

austaltools._dispersion.klug_manier_scheme(*args, **kwargs)

shorthand for the currently valid version of the Klug/Manier scheme austaltools._dispersion.klug_manier_scheme_2017()

austaltools._dispersion.klug_manier_scheme_1992(time: Timestamp | DatetimeIndex | datetime64 | str | list[str], ff: float | list[float] | Series, tcc: float | list[float] | Series, lat: float, lon: float, cty: str | list[float] | Series | None = None) int | Series

Calculate stability class after Klug/Manier according to VDI 3782 Part 1 (issued 1992)

Category   Atmospheric stability   Index
I          very stable             1
II         stable                  2
III/1      neutral/stable          3
III/2      neutral/unstable        4
IV         unstable                5
V          very unstable           6

Parameters:
  • time – (required, time-like) An arbitrary time during the day of year for which sunrise and sunset should be calculated. May be supplied as any form accepted by pandas.to_datetime(), e.g. timestamp (“2000-12-14 18:00:00”) or datetime64. If timezone is not supplied, UTC is assumed. If timezone is supplied, time is converted to CET.

  • ff – (required, float) wind speed in 10m height.

  • tcc – (required, float) total cloud cover as fraction of 1 (equals value in octa divided by 8).

  • lat – (required, float) latitude in degrees. Southern latitudes must be negative.

  • lon – (required, float) longitude in degrees. Eastern longitudes are positive, western longitudes are negative.

  • cty – (optional, str) cloud type of the lowest cloud layer. When it is “CI”, “CS”, or “CC”, the condition “cloud coverage exclusively consists of high clouds (Cirrus)” is met. If absent, “CU” is assumed.

Returns:

class value (numeric index)

Return type:

int if time is a scalar, pandas.Series(int64) if time is array-like.

austaltools._dispersion.klug_manier_scheme_2017(time: DatetimeIndex | Timestamp | datetime64 | str | list[str], ff: float | list[float] | Series, tcc: float | list[float] | Series, lat: float, lon: float, ele: float, cty: float | list[float] | Series | None = None, cbh: float | list[float] | Series | None = None, _cloudout=False)

Calculate stability class after Klug/Manier according to VDI 3782 Part 6 (issued Apr 2017)

Category   Atmospheric stability   Index
I          very stable             1
II         stable                  2
III/1      neutral/stable          3
III/2      neutral/unstable        4
IV         unstable                5
V          very unstable           6

The norm states:

Strictly speaking, the above correction conditions apply only to locations in Central Europe with a pronounced seasonal climate and sunrise and sunset times definable over the whole year, which in particular during the winter months always exhibit a time difference exceeding six hours. These conditions are met in Germany. For other countries, CET should be replaced where relevant by the corresponding zone time. In climatic zones with diurnal climate or other calendar classifications of astronomical seasons, the correction conditions are not directly applicable in the above form. Adaptation of subsections a to d for other global climatic zones does not form a part of this standard.

Parameters:
  • time – (required, time-like) An arbitrary time during the day of year for which sunrise and sunset should be calculated. May be supplied as any form accepted by pandas.to_datetime(), e.g. timestamp (“2000-12-14 18:00:00”) or datetime64. If timezone is not supplied, UTC is assumed. If timezone is supplied, time is converted to CET.

  • ff

    (required, float) wind speed in 10m height. VDI 3782 Part 6 states:

    The standard conditions for the wind speed (υa) are the standard measurement height of 10 m above ground (VDI 3786 Part 2; VDI 3783 Part 8) in combination with a roughness length of z0 = 0,1 m. Other measurement heights are suitable if they equal at least twelve times the roughness length and are at least 4 m above ground level. If the wind speed is available for other than the above standard conditions, i.e. for another suitable measurement height or a different roughness length, a conversion needs to be carried out.

  • tcc – (required, float) total cloud cover as fraction of 1 (equals value in octa divided by 8).

  • lat – (required, float) latitude in degrees. Southern latitudes must be negative.

  • lon – (required, float) longitude in degrees. Eastern longitudes are positive, western longitudes are negative.

  • ele – (required, float) surface elevation above sea level in m.

  • cbh – (optional, float) cloud base height in m.

  • cty – (optional, str) cloud type of the lowest cloud layer. When it is “CI”, “CS”, or “CC”, the condition “cloud coverage exclusively consists of high clouds (Cirrus)” is met. If absent, “CU” is assumed.

  • _cloudout – (optional, boolean) for verification only.

Returns:

class value (numeric index)

Return type:

int if time is a scalar, pandas.Series(int64) if time is array-like.

austaltools._dispersion.obukhov_length(ust: float | Series, rho: float | Series, Tv: float | Series, H: float | Series, E: float | Series, Kelvin: bool | None = None) float | ndarray

Returns the Obukhov length [GOL1972] from surface values of air density, virtual temperature, and latent and sensible heat-flux density.

Parameters:
  • ust (pandas.Series or float) – friction velocity in m/s.

  • rho (pandas.Series or float) – density of air kg/m^3.

  • Tv (pandas.Series or float) – virtual temperature in K or C, depending on Kelvin.

  • H (pandas.Series or float) – surface sensible heat flux density in W/m^2.

  • E (pandas.Series or float) – surface latent heat flux density in W/m^2.

  • Kelvin – (optional) If True, all temperatures are assumed to be in Kelvin. If False, all temperatures are assumed to be in Celsius. If missing or None, temperature units are autodetected. Defaults to None.
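For illustration, the textbook form of the Obukhov length can be sketched as follows. The buoyancy (virtual) heat flux is approximated here as H_v ≈ H + 0.07 E, a common approximation; the module's implementation may differ in detail:

```python
import math

KAPPA = 0.4   # von Kármán constant
G = 9.81      # gravitational acceleration, m/s^2
CP = 1004.0   # specific heat of air at constant pressure, J/(kg K)

def obukhov_length_sketch(ust, rho, tv_kelvin, h, e):
    """Obukhov length L in m from friction velocity ust (m/s),
    air density rho (kg/m^3), virtual temperature (K), and
    sensible/latent heat flux densities h, e (W/m^2)."""
    # buoyancy heat flux, approximated from sensible + latent flux
    hv = h + 0.07 * e
    return -(ust ** 3) * rho * CP * tv_kelvin / (KAPPA * G * hv)

# daytime convective case: upward heat flux gives negative (unstable) L
L = obukhov_length_sketch(ust=0.3, rho=1.2, tv_kelvin=293.0, h=100.0, e=50.0)
print(round(L, 1))
```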

austaltools._dispersion.pasquill_taylor_scheme(time: DatetimeIndex | Timestamp | datetime64 | str | list[str], ff: float | list[float] | Series, tcc: float | list[float] | Series, lat: float, lon: float, ceil: float | list[float] | Series)

Calculate stability class after Pasquill and Turner [EPA2000]

Category   Atmospheric stability   Index
I          very stable             1
II         stable                  2
III/1      neutral/stable          3
III/2      neutral/unstable        4
IV         unstable                5
V          very unstable           6

The norm states:

Strictly speaking, the above correction conditions apply only to locations in Central Europe with a pronounced seasonal climate and sunrise and sunset times definable over the whole year, which in particular during the winter months always exhibit a time difference exceeding six hours. These conditions are met in Germany. For other countries, CET should be replaced where relevant by the corresponding zone time. In climatic zones with diurnal climate or other calendar classifications of astronomical seasons, the correction conditions are not directly applicable in the above form. Adaptation of subsections a to d for other global climatic zones does not form a part of this standard.

Parameters:
  • time – (required, time-like) An arbitrary time during the day of year for which sunrise and sunset should be calculated. May be supplied as any form accepted by pandas.to_datetime(), e.g. timestamp (“2000-12-14 18:00:00”) or datetime64. If timezone is not supplied, UTC is assumed. If timezone is supplied, time is converted to CET.

  • ff – (required, float) wind speed in 10m height.

  • tcc – (required, float) total cloud cover as fraction of 1 (equals value in octa divided by 8).

  • lat – (required, float) latitude in degrees. Southern latitudes must be negative.

  • lon – (required, float) longitude in degrees. Eastern longitudes are positive, western longitudes are negative.

  • ceil – (required, float) cloud base height in m.

Returns:

class value (numeric index)

Return type:

int if time is a scalar, pandas.Series(int64) if time is array-like.

austaltools._dispersion.stabilty_class(classifyer: str, time: DatetimeIndex | Timestamp | datetime64 | list[str], z0: Series | float, L: Series | float) list[int]

Returns the atmospheric stability class according to the selected classification scheme.

Parameters:
  • classifyer (str) – The classification method (‘Klug/Manier’, ‘KM2021’, ‘KM’, ‘TA Luft 2021’, ‘KM2002’, or ‘TA Luft 2002’).

  • time (pandas.DatetimeIndex or datetime64) – Date and time

  • z0 (pandas.Series or float) – Roughness length(s)

  • L (pd.Series or float) – Monin-Obukhov length(s) in m

Returns:

Stability class indices (1-6; 9 for missing values)

Return type:

list

Raises:
  • ValueError – If shapes of time, z0, and L are not equal.

  • ValueError – If an unknown classification method is provided.

Example:
>>> import pandas as pd
>>> time = pd.DatetimeIndex(['2024-08-02 12:00:00'])
>>> z0 = pd.Series(0.1, index=time)
>>> L = pd.Series(-100, index=time)  # Example Monin-Obukhov length
>>> result = stabilty_class('KM2021', time, z0, L)
>>> print("Stability class indices:", result)
Stability class indices: [2]
austaltools._dispersion.taylor_insolation_class(solar_altitude: float) int
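taylor_insolation_class carries no description above; the Turner insolation categories from [EPA2000] that such a function typically implements can be sketched as below. The thresholds are the commonly cited ones and are an assumption here, not taken from the module:

```python
def insolation_class_sketch(solar_altitude_deg: float) -> int:
    """Turner insolation class number from the solar altitude
    in degrees: 4 = strong, 3 = moderate, 2 = slight, 1 = weak."""
    if solar_altitude_deg > 60.0:
        return 4       # strong insolation
    if solar_altitude_deg > 35.0:
        return 3       # moderate insolation
    if solar_altitude_deg > 15.0:
        return 2       # slight insolation
    return 1           # weak insolation

print(insolation_class_sketch(70.0))  # high sun, strong insolation
```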
austaltools._dispersion.turners_key(ff: float, NRI: int) int

Returns the P-G stability class matching a wind speed class and net radiation index [EPA2000]

Parameters:
  • ff (float) – wind speed in m/s

  • NRI (int) – net radiation index

Returns:

P-G stability class as number (1=A, 2=B,…)

Return type:

int

austaltools._dispersion.vdi_3872_6_standard_wind(va: float | ndarray, hap: float, z0p: float) float | ndarray

Returns the calculation value of the wind speed according to VDI 3782 Part 6, Annex A

The norm is based on wind speed values that are taken at the standard measurement height of 10 m above ground (VDI 3786 Part 2; VDI 3783 Part 8; [5; 6]) in combination with a roughness length of \(z0 = 0.1\) m. If the wind speed \(v_a\) is available for other than the standard conditions, a conversion needs to be carried out from the conditions (measurement height \(h_a'\), roughness length \(z_0'\)) at the measurement site to the standard conditions.

Parameters:
  • va – (required, float or array-like) measured wind speed (\(v_a\)) in m/s.

  • hap – (required, float) height of the wind measurement above ground (\(h_a\)) in m.

  • z0p – (required, float) roughness length at the measurement site (\(z_0\)) in m.
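Under the simplifying assumption of a single neutral logarithmic profile with unchanged friction velocity, the conversion to standard conditions (10 m height, z0 = 0.1 m) reduces to a ratio of logarithmic factors. This sketch only illustrates the principle; the actual Annex A procedure contains additional steps such as profile matching between sites:

```python
import math

def to_standard_wind_sketch(va, hap, z0p, h_std=10.0, z0_std=0.1):
    """Rescale wind speed measured at height hap over roughness z0p
    to the standard height/roughness, using a neutral log profile
    with the same friction velocity (simplified illustration)."""
    return va * math.log(h_std / z0_std) / math.log(hap / z0p)

# measurement already at standard conditions: value unchanged
print(to_standard_wind_sketch(5.0, 10.0, 0.1))
```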

austaltools._dispersion.vdi_3872_6_sun_rise_set(time: Timestamp | DatetimeIndex | datetime64 | str | list[str], lat: float, lon: float) tuple[float, float] | tuple[Series, Series]

Sunrise and sunset calculation according to VDI 3782 Part 6, Annex A

Based on equation (B10) for the solar elevation angle \({\gamma}\) quoted in VDI 3789:

\(\sin\gamma = \sin\phi \sin\delta + \cos\phi \cos\delta \cos\omega_0\)

Parameters:
  • time – (required, time-like) An arbitrary time during the day of year for which sunrise and sunset should be calculated. May be supplied as any form accepted by pandas.to_datetime(), e.g. timestamp (“2000-12-14 18:00:00”) or datetime64. If timezone is not supplied, CET (without daylight saving) is assumed. If timezone is supplied, time is converted to CET.

  • lat – (required, float) latitude in degrees. Southern latitudes must be negative.

  • lon – (required, float) longitude in degrees. Eastern longitudes are positive, western longitudes are negative. Only positions inside the CET timezone (-9.5 < lon < 32.0) are allowed, by definition.

Returns:

sunrise, sunset as decimal hours in the timezone supplied in parameter time.

Return type:

tuple(float) if time is a scalar, tuple(pandas.Series) if time is array-like.
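Setting the solar elevation to zero in the equation quoted above yields the sunrise/sunset hour angle via cos(omega_0) = -tan(phi) tan(delta); in decimal hours of local solar time, sunrise is 12 - omega_0 * 12/pi and sunset is 12 + omega_0 * 12/pi. A minimal sketch in local solar time, without the refraction or zone-time handling the norm requires:

```python
import math

def sun_rise_set_sketch(lat_deg: float, decl_deg: float):
    """Sunrise and sunset as decimal hours of local solar time
    for latitude lat_deg and solar declination decl_deg."""
    phi = math.radians(lat_deg)
    delta = math.radians(decl_deg)
    # hour angle where the solar elevation crosses zero
    cos_w0 = -math.tan(phi) * math.tan(delta)
    cos_w0 = max(-1.0, min(1.0, cos_w0))  # clamp for polar day/night
    w0 = math.acos(cos_w0)
    return 12.0 - w0 * 12.0 / math.pi, 12.0 + w0 * 12.0 / math.pi

# at the equinox (declination 0) the sun rises at 06:00 solar time
print(sun_rise_set_sketch(50.0, 0.0))
```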

austaltools._dispersion.z0_verkaik(z: float, speed: float | list | Series, gust: float | list | Series, dirct: float | list | Series, rose: bool = False) float | tuple[DataFrame, DataFrame]

Calculates an estimate of the roughness length of a site from the gustiness of the wind, according to the method by Verkaik (as used by Koßmann and Namyslo [KoNa2019]).

Parameters:
  • z (float, list or pandas.Series) – height of the wind measurement in meters

  • speed (float, list or pandas.Series) – wind speed

  • gust (float, list or pandas.Series) – wind gust speed

  • dirct (float, list or pandas.Series) – wind direction

  • rose (bool) – If True, individual values of the roughness length in m and the number of intervals in which the wind came from each sector are returned for 12 wind-direction sectors (clockwise from north). If False, one mean roughness length value in m for the site is returned. Defaults to False.

Returns:

roughness length, either as a mean or as sector-wise value(s)

Return type:

float or tuple[list,list], depending on rose

austaltools._dispersion.KM2002

Klug/Manier stability classes. Class center values taken from TA Luft 2002 [TAL2002].

Tabelle 17: Bestimmung der Monin–Obukhov–Länge L_M

austaltools._dispersion.KM2021

Klug/Manier stability classes. Class center values taken from TA Luft 2021 [TAL2021].

Tabelle 17: Klassierung der Obukhov-Länge L in m

austaltools._dispersion.PG1972

Pasquill-Gifford stability classes. Class boundaries scraped from [GOL1972] Fig. 5

According to EPA (link?) class G is neglected for regulatory modeling

_fetch_dwd

Created on Thu Feb 3 19:20:42 2022

@author: clemens

class austaltools._fetch_dwd.DWDStationinfo(stationfile=None)

Class that holds information about the weather stations from a dataset.

This class retrieves metadata about a specific weather station from a dataset that is either provided or located in a default location. The dataset is expected to be a JSON file containing information about multiple weather stations, including their geographical coordinates, elevation, and names.

Parameters:

stationfile (str | None) – The path to the JSON file containing station information. If None, a default path is used. Defaults to None.

This class can be used as a context manager.

Example:

with DWDStationinfo() as si:
    lat, lon, ele = si.position(1234)
name(station: int) str

Retrieves the name of a specific weather station identified by the station number.

Parameters:

station (int) – DWD station number

Returns:

name

Return type:

str

nearest(lat: float, lon: float, radius: float | None = None) int | None

Returns the number of the nearest station in the stationinfo file.

If radius is given, a station is only returned if one is found within the given radius around the position.

Parameters:
  • lat (float) – latitude in degrees

  • lon (float) – longitude in degrees

  • radius (float) – Maximum distance to the returned station in km

Returns:

station number or None if no station inside the search radius

Return type:

int | None

position(station: int) tuple[float, float, float]

Retrieves the position of a specific weather station identified by the station number.

Returns latitude, longitude, and elevation.

Parameters:

station (int) – DWD station number

Returns:

lat, lon, ele

Return type:

(float, float, float)

roughness(station: int) str

Retrieves the surface roughness \(z_0\) at a specific weather station identified by the station number.

Parameters:

station (int) – DWD station number

Returns:

roughness length in m

Return type:

float

data = Empty DataFrame Columns: [] Index: []
austaltools._fetch_dwd.build_table(dat_df_in: DataFrame, meta_df_in: DataFrame, years: list) DataFrame | None
austaltools._fetch_dwd.data_from_download(product_files: list[str], path_to_files: str, oldest: Timestamp | datetime | str | None = None) DataFrame

Build one single table of weather data from the individual downloaded files

Parameters:
  • product_files – list of extracted “produkt” files

  • path_to_files – path where the product files are stored

Returns:

weather timeseries as dataframe. The columns are named as they appear in the “produkt” files, except “MESS_DATUM” and “STATIONS_ID”. Instead, the index contains the time of the measurement as datetime64.

Return type:

pandas.DataFrame

austaltools._fetch_dwd.fetch_dirlist(url: str, pattern: str = '.*') list[str]

get directory listing from (opendata) server

Parameters:
  • url (str) – directory URL

  • pattern (str) – filter directory entries by this regex pattern

Returns:

file names

Return type:

list

austaltools._fetch_dwd.fetch_file(group: str, station: int | str, era: str | None = None, local_path: str = '.') str

download observation file from (opendata) server

Parameters:
  • group (str) – name of parameter group, for example ff

  • station ((int, str)) – DWD station number

  • era (str) – current or historical

  • local_path (str) – where to store the downloaded file

Returns:

name of the downloaded file

Return type:

str

austaltools._fetch_dwd.fetch_station_data(station: int, store: bool = True, time_start: Timestamp | str | None = None, time_end: Timestamp | str | None = None) tuple[DataFrame, DataFrame] | tuple[str, str]

Ensure that the DWD weather station data for station number station is available at storage_path. If not, data is downloaded and stored in the storage_path.

Parameters:
  • station (int) – DWD station number

  • store – If True, data are saved to files; if False, data are returned as data frames

  • time_start (pd.Timestamp | str) – start of desired time window or None for getting earliest available data

  • time_end (pd.Timestamp | str) – end of desired time window or None for getting latest available data

Returns:

data file name or DataFrame and metadata file name or DataFrame

Return type:

tuple[pd.DataFrame, pd.DataFrame] | tuple[str, str]

austaltools._fetch_dwd.fetch_stationlist(years: list[int] | int | None = None, fullyear=True) dict[str, dict]

compile the station list from (opendata) server

Parameters:
  • years (list) – list of years for which the stations should have reported data; must be continuous and in ascending order

  • fullyear – If True, stations are only listed if they have reported data for the full period. If False, stations that started operation in the first year or ceased operation in the last year are also listed.

Returns:

list of stations

Return type:

dict[dict]

austaltools._fetch_dwd.get_meta_value(metadata: str | DataFrame, time_begin: DatetimeIndex | Timestamp | datetime64 | str, time_end: DatetimeIndex | Timestamp | datetime64 | str, par_name: str) Any

get the station metadata value for parameter par_name valid for the time period from time_begin to time_end

Parameters:
  • metadata – filename or pandas dataframe

  • time_begin – start time as string or datetime-like

  • time_end – end time as string or datetime-like

  • par_name – string containing the parameter name

Returns:

values for parameter par_name

Return type:

pandas.Series

austaltools._fetch_dwd.meta_from_download(metadata_files: list[str], station: int, path_to_files: str) DataFrame

Build one single table of the metadata provided by the individual metadata files contained in the downloaded zip archives

Parameters:
  • metadata_files – list of extracted “Metadaten” files

  • path_to_files – path where these files are stored

Returns:

metadata table as dataframe. The columns are named as they appear in the “produkt” files, except “MESS_DATUM” and “STATIONS_ID”. Instead, the index contains the time of the measurement as datetime64.

Return type:

pandas.DataFrame

austaltools._fetch_dwd.METAFILE_DWD = 'metadata_%05i.csv'

filename pattern for cached DWD metadata

austaltools._fetch_dwd.OBSFILE_DWD = 'observations_hourly_%05i.csv'

filename pattern for cached DWD observations

austaltools._fetch_dwd.OLDEST = Timestamp('1970-01-01 00:00:00+0000', tz='UTC')

remove observations before … to avoid problems with odd observation timing in the very manual era

austaltools._fetch_dwd.TO_COLLECT = [['air_temperature', 'TU', 'tu'], ['cloud_type', 'CS', 'cs'], ['precipitation', 'RR', 'rr'], ['pressure', 'P0', 'p0'], ['soil_temperature', 'EB', 'eb'], ['visibility', 'VV', 'vv'], ['wind', 'FF', 'ff']]

parameter groups to collect from opendata file tree

_geo

This module provides geo-position-related functionality.

austaltools._geo.evaluate_location_opts(args: dict)

get position from the command-line location options and if applicable the WMO station number of this position

Parameters:

args (dict) – parsed arguments

Returns:

position as lat, lon (WGS84) and rechts, hoch in Gauss-Krüger Band 3 and WMO station number of this position (0 if not applicable)

Return type:

float, float, float, float, int

austaltools._geo.gk2ll(rechts: float, hoch: float) -> (float, float)

Converts Gauss-Krüger rechts/hoch (east/north) coordinates (DHDN / 3-degree Gauss-Kruger zone 3 (E-N), https://epsg.io/5677) into Latitude/longitude (WGS84, https://epsg.io/4326) position.

Parameters:
  • rechts (float) – “Rechtswert” (eastward coordinate) in m

  • hoch (float) – “Hochwert” (northward coordinate) in m

Returns:

latitude in degrees, longitude in degrees, altitude in meters

Return type:

float, float, float

austaltools._geo.gk2ut(rechts: float, hoch: float) -> (float, float)

Converts Gauss-Krüger rechts/hoch (east/north) coordinates (DHDN / 3-degree Gauss-Kruger zone 3 (E-N), https://epsg.io/5677) into UTM east/north coordinates (ETRS89 / UTM zone 32N, https://epsg.io/25832).

Parameters:
  • rechts (float) – “Rechtswert” (eastward coordinate) in m

  • hoch (float) – “Hochwert” (northward coordinate) in m

Returns:

“easting” (eastward coordinate) in m, “northing” (northward coordinate) in m

Return type:

float, float

austaltools._geo.ll2gk(lat: float, lon: float) -> (float, float)

Converts Latitude/longitude (WGS84, https://epsg.io/4326) position into Gauss-Krüger rechts/hoch (east/north) coordinates (DHDN / 3-degree Gauss-Kruger zone 3 (E-N), https://epsg.io/5677).

Parameters:
  • lat (float) – latitude in degrees

  • lon (float) – longitude in degrees

Returns:

“Rechtswert” (eastward coordinate) in m, “Hochwert” (northward coordinate) in m

Return type:

float, float

austaltools._geo.ll2ut(lat: float, lon: float) -> (float, float)

Converts Latitude/longitude (WGS84, https://epsg.io/4326) position into UTM east/north coordinates (ETRS89 / UTM zone 32N, https://epsg.io/25832)

Parameters:
  • lat (float) – latitude in degrees

  • lon (float) – longitude in degrees

Returns:

“easting” (eastward coordinate) in m, “northing” (northward coordinate) in m

Return type:

float, float

austaltools._geo.spheric_distance(lat1, lon1, lat2, lon2)

Calculate the great circle distance between two points (specified in decimal degrees) on a spheric earth. Reference: https://stackoverflow.com/a/29546836/7657658

Parameters:
  • lat1 – Position 1 latitude in degrees

  • lon1 – Position 1 longitude in degrees

  • lat2 – Position 2 latitude in degrees

  • lon2 – Position 2 longitude in degrees

Type:

float

Type:

float

Type:

float

Type:

float

Returns:

Great circle distance in km

Return type:

float
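
The great-circle computation can be sketched with the standard haversine formula (a hedged re-implementation based on the referenced StackOverflow answer; the actual austaltools code may use a slightly different earth radius):

```python
import math

def spheric_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in decimal degrees."""
    # convert to radians and apply the haversine formula
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))  # 6371 km mean earth radius
```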

austaltools._geo.ut2gk(east: float, north: float) -> (<class 'float'>, <class 'float'>)

Converts UTM east/north coordinates (ETRS89 / UTM zone 32N, https://epsg.io/25832) into Gauss-Krüger rechts/hoch (east/north) coordinates (DHDN / 3-degree Gauss-Kruger zone 3 (E-N), https://epsg.io/5677).

Parameters:
  • east – eastward UTM coordinate in m

  • north – northward UTM coordinate in m

Type:

float

Type:

float

Returns:

“Rechtswert” (eastward coordinate) in m, “Hochwert” (northward coordinate) in m

Return type:

float, float

austaltools._geo.ut2ll(east: float, north: float) -> (<class 'float'>, <class 'float'>)

Converts UTM east/north coordinates (ETRS89 / UTM zone 32N, https://epsg.io/25832) into Latitude/longitude (WGS84, https://epsg.io/4326) position.

Parameters:
  • east – eastward UTM coordinate in m

  • north – northward UTM coordinate in m

Type:

float

Type:

float

Returns:

latitude in degrees, longitude in degrees

Return type:

float, float

_plotting

austaltools._plotting.common_plot(args: dict, dat: dict, unit: str = '', topo: dict = None, dots: dict = None, buildings: list = None, mark: dict = None, scale: list = None)

Standard plot function for the package.

Parameters:
  • args (dict) – dict containing the plot configuration

  • args["colormap"] (str) – name of the colormap to use. Defaults to austaltools._tools.DEFAULT_COLORMAP.

  • args['kind'] – How to display the data. Permitted values are “contour” for colour filled contour levels and “grid” for color-coded rectangular grid.

  • args['fewcols'] (bool) – if True, a colormap of at most 9 discrete colors (or the number of levels, if explicitly passed via scale) is generated for easy print reproduction.

  • args["plot"] – Destination for the plot. If empty or None, no plot is produced. If the value is a string, the plot is saved to a file with that name. If the name does not have the extension .png, this extension is appended. If the string does not contain a path, the file is saved in the current working directory; if it contains a path, the file is saved in the respective location.

  • args['working_dir'] – Working directory, where the data files reside.

  • dat (dict) – dictionary of x, y, and z values to plot. ‘x’ and ‘y’ must be lists of float or 1-D ndarray. ‘z’ must be an ndarray of a shape matching the lengths of x and y.

  • unit (str, optional) – physical unit of the values z in dat

  • scale – range of the color scale. None means auto scaling.

  • topo (dict or string or None) – topography data as dict (same form as dat) or filename of a topography file in dmna-format or None for no topography

  • dots – data to overlay dotted areas (e.g. to mark significance). dots must either be a dict (same form as dat) or an ndarray matching the z data in dat in shape. Values z < 0 are not overlaid, values 0 <= z < 1 are sparsely dotted, and values 1 <= z < 2 are densely dotted.

  • buildings (list) – List of Building objects to be displayed. If None or the list is empty, no buildings are plotted.

  • mark (dict or pandas.DataFrame) – positions to mark: either a dict containing list-like objects x, y, and optionally ‘symbol’, all of the same length, or a pandas DataFrame containing such columns. ‘symbol’ entries are matplotlib marker strings; if missing, ‘o’ is used.
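
A minimal sketch of assembling the dat argument (the grid sizes are hypothetical, and the (y, x) orientation of z is an assumption here; check it against your data):

```python
import numpy as np

# hypothetical coordinate axes of an 11 x 21 result grid
x = np.linspace(0.0, 100.0, 11)
y = np.linspace(0.0, 200.0, 21)
# z indexed as (y, x) is an assumption -- verify against your dmna output
z = np.zeros((len(y), len(x)))
dat = {"x": x, "y": y, "z": z}
```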

austaltools._plotting.consolidate_plotname(argument, default: str | None = None)
austaltools._plotting.plot_add_mark(ax, mark)
austaltools._plotting.plot_add_topo(ax, topo, working_dir='.')
austaltools._plotting.read_topography(topo_path)

_metadata

austaltools._metadata.get_metadata()

Get package metadata from pyproject.toml

_netcdf

Module that holds utilities for manipulating netCDF4 files

class austaltools._netcdf.VariableSkeleton(name: str, datatype: str, dimensions: tuple = (), compression: str | None = None, zlib: bool = False, complevel: int = 4, shuffle: bool = True, szip_coding: str = 'nn', szip_pixels_per_block: int = 8, blosc_shuffle: int = 1, fletcher32: bool = False, contiguous: bool = False, chunksizes=None, endian: str = 'native', least_significant_digit=None, fill_value=None, chunk_cache=None)

Class that can hold the same attributes as netCDF4.Variable except for group so it can serve as a skeleton to add to a netCDF4.Dataset

getncattr(name)

Get the value of a variable attribute

Parameters:

name (str) – name of the attribute

Returns:

value of the attribute

Return type:

Any

ncattrs()

List the names of the variable attributes set in this instance

setncattr(name, value)

Set the value of a variable attribute

Parameters:
  • name (str) – name of the attribute

  • value (Any) – value to set

name = None
ncattr = {}
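
Conceptually, the class stores attributes in a plain dict; a stripped-down sketch of the attribute interface (not the actual implementation):

```python
class VariableSkeleton:
    """Minimal sketch: mimics netCDF4.Variable attribute handling."""
    def __init__(self, name, datatype, dimensions=()):
        self.name = name
        self.datatype = datatype
        self.dimensions = dimensions
        self.ncattr = {}

    def setncattr(self, name, value):
        # store the attribute value under its name
        self.ncattr[name] = value

    def getncattr(self, name):
        # return the stored attribute value
        return self.ncattr[name]

    def ncattrs(self):
        # list the names of all attributes set so far
        return list(self.ncattr)
```
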
austaltools._netcdf.add_variable(dst: Dataset, svar: Variable, replace: dict[str, str | VariableSkeleton | None] = {}, compression: str | None = None)

Add a variable to the destination NetCDF dataset, with support for renaming, replacing, and setting compression options.

Parameters:
  • dst (netCDF4.Dataset) – Destination NetCDF dataset.

  • svar (netCDF4.Variable) – Source NetCDF variable to add.

  • replace (dict[str, str | VariableSkeleton | None]) –

    A dictionary that specifies variables to be replaced or removed:
    • If a variable name maps to a string, it is renamed.

    • If a variable name maps to a new variable object, it replaces the original.

    • If a variable name maps to None, the variable is omitted in the destination.

  • compression (str, optional) – Compression setting for the variable, defaults to None.

Returns:

True if the variable was added successfully, False if it already exists.

Return type:

bool
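
The three cases of the replace mapping can be sketched as a small helper (resolve_replace is a hypothetical name, not part of the package):

```python
def resolve_replace(name, replace):
    """Interpret one entry of a replace mapping.

    Returns (destination_name, skeleton_or_None, keep_flag).
    """
    if name not in replace:
        return name, None, True               # copy unchanged
    target = replace[name]
    if target is None:
        return name, None, False              # omit in the destination
    if isinstance(target, str):
        return target, None, True             # rename
    return getattr(target, "name", name), target, True  # replace by a skeleton
```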

austaltools._netcdf.check_homhogenity(file_list, timevar=None, fail=False)

Check if all NetCDF datasets in the provided list have identical dimensions, attributes, variables, and variable attributes, except for the size of a specified dimension.

Parameters:
  • file_list (list of str) – List of file paths to the NetCDF datasets to check.

  • timevar (str, optional) – Name of the dimension variable that can vary in size across datasets, defaults to None.

  • fail (bool, optional) – If True, raises an exception when inconsistency is found, defaults to False.

Returns:

True if all datasets are consistent; otherwise, False.

Return type:

bool

Raises:

ValueError – If fail is True and inconsistency is detected.

austaltools._netcdf.copy_structure(src, dst, replace: dict[str, str | VariableSkeleton | None] = {}, convert: dict[str, Callable] = {}, resize: dict[str, int | None] = {}, compression: str | None = None, copy_data: bool = False) None

Copy the structure and optionally the data of a NetCDF source dataset to a destination dataset.

This function facilitates the duplication of NetCDF dataset structures, including dimensions, variables, and global attributes. Users can opt to modify certain aspects, such as renaming variables, applying transformations to data, or changing dimensions, to suit specific requirements.

Parameters:
  • src (netCDF4.Dataset) – The source dataset from which to copy the structure.

  • dst (netCDF4.Dataset) – The destination dataset where the structure will be copied.

  • replace (dict[str, str | VariableSkeleton | None]) –

    A dictionary that specifies variables to be replaced or removed:
    • If a variable name maps to a string, it is renamed.

    • If a variable name maps to a new variable object, it replaces the original.

    • If a variable name maps to None, the variable is omitted in the destination.

  • convert (dict[str, collections.abc.Callable]) – A dictionary that maps variable names to functions that transform their data values. These functions are applied to variable data during the copy.

  • resize (dict[str, int | None]) – Dictionary indicating if the copy should have a different size for one or more dimensions. In case a dimension also appears in replace, its name before replacement must be given here. To make a dimension unlimited (only one per file), the length must be changed to None. If empty, no change is made to dimension limits.

  • compression (str | None) – The compression method to apply to the copied variables, commonly set to zlib. Defaults to None

  • copy_data (bool, optional) –

    A boolean flag indicating whether the variable is copied with or without values in it.
    • If True, variable data is copied and transformed using convert.

    • If False, only the definition (dimensions, variables, attributes) is copied.

    Defaults to False.

Raises:

ValueError – Raised if an attempt is made to exclude a mandatory dimension without proper replacement.

austaltools._netcdf.copy_values(src, dst, replace: dict[str, str | VariableSkeleton | None] = {}, convert: dict[str, Callable] = {}) bool

Copy values from source NetCDF dataset to destination dataset with optional replacement and conversion of variable values.

Parameters:
  • src (netCDF4.Dataset) – Source NetCDF dataset.

  • dst (netCDF4.Dataset) – Destination NetCDF dataset.

  • replace (dict[str, str | VariableSkeleton | None]) – Mapping of source variable names to destination variable names or skeletons.

  • convert (dict[str, collections.abc.Callable]) – Mapping of source variable names to functions for converting data.

Returns:

True if the operation is successful.

Return type:

bool

austaltools._netcdf.get_dimensions(dataset: Dataset, timevar: str | None = None) dict

Helper function to get dimensions of a dataset

Parameters:
  • dataset (netCDF4.Dataset) – dataset

  • timevar (str, optional) – name of time variable (to be excluded)

Returns:

dataset dimension names and sizes

Return type:

dict[str, int]

austaltools._netcdf.get_global_attributes(dataset)

Helper function to get file attributes of a dataset

Parameters:

dataset (netCDF4.Dataset) – dataset

Returns:

dataset attribute names and values

Return type:

dict[str, str]
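
Since netCDF4 datasets expose ncattrs() and getncattr(), this helper essentially reduces to a dict comprehension (a hedged sketch, not necessarily the package implementation):

```python
def get_global_attributes(dataset):
    # dataset: a netCDF4.Dataset (or any object with the same interface)
    return {name: dataset.getncattr(name) for name in dataset.ncattrs()}
```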

austaltools._netcdf.get_variable_attributes(dataset, var)

Helper function to get the attributes of variable of a dataset

Parameters:
  • dataset (netCDF4.Dataset) – dataset

  • var (str) – variable name

Returns:

dataset variable names and information values:
  • dimensions: the dimensions of the variable

  • attributes: names of the attributes of the variable

  • dtype: data type

Return type:

dict[dict[str, str|dict]]

austaltools._netcdf.get_variables(dataset)

Helper function to get variables information of a dataset

Parameters:

dataset (netCDF4.Dataset) – dataset

Returns:

dataset variable names and information values:
  • dimensions: the dimensions of the variable

  • attributes: names of the attributes of the variable

  • dtype: data type

Return type:

dict[dict[str, str|dict]]

austaltools._netcdf.merge_time(infiles: list | str, target: str, timevar: str = 'time', compression: str | None = None, allow_duplicates: bool = False, remove_source: bool = True)

Function that takes a list of input NetCDF files, each representing temporal slices of a dataset, and merges them into a single output file along a specified time dimension.

The time span covered by the input files may overlap, and the input files do not need to be sorted, but they must not contain multiple entries for one time.

The check for duplicate times occurring in the input files can be disabled by allow_duplicates, but when this option is chosen it is not defined which of the respective input records appears in the output file.

Parameters:
  • infiles (list of str) – List of input files to be concatenated. These files should have a consistent structure and metadata except for the time dimension.

  • target (str) – The path to the output file that will store the result.

  • timevar (str, optional) – The name of the time dimension variable used. It is expected that this variable indicates the time period covered by each file. Defaults to “time”.

Returns:

Returns True upon successful concatenation of all files, indicating the result is stored correctly.

Return type:

bool

Raises:

ValueError – If the time dimension is inconsistent among the input files.

Logging:

Various stages of the process are logged, including:
  • The initial setup and copying of structure from the first file.

  • Updates to the time dimension during concatenation.

  • Removal of temporary files post-completion.

Example:

Usage example for two input files: >>> merge_time(["file1.nc", "file2.nc"], "output.nc")

austaltools._netcdf.merge_variables(infiles: list[str], target: str, replace: dict[str, str | VariableSkeleton | None] = {}, convert: dict[str, Callable] = {}, compression: str | None = None, remove_source: bool = True)

Merge the variables of multiple netCDF files into a single netCDF file.

Parameters:
  • infiles (list[str]) – List of paths to the files to read

  • target (str) – path of the destination file to create

  • replace (dict[str, str | VariableSkeleton | None]) –

    A dictionary that specifies variables to be replaced or removed:
    • If a variable name maps to a string, it is renamed.

    • If a variable name maps to a new variable object, it replaces the original.

    • If a variable name maps to None, the variable is omitted in the destination.

  • convert (dict[str, collections.abc.Callable]) – A dictionary that maps variable names to functions that transform their data values. These functions are applied to variable data during the copy.

  • compression (str | None) – compression type (e.g. zlib), optional; defaults to None

austaltools._netcdf.replace_cds_valid_time(compression: str)
austaltools._netcdf.subset_xy(infile, target, xmin: int | float | None = None, xmax: int | float | None = None, ymin: int | float | None = None, ymax: int | float | None = None, tmin: int | float | None = None, tmax: int | float | None = None, xvar: str | None = None, yvar: str | None = None, timevar: str | None = None, by_index: bool = False, replace: dict = {}, convert: dict = {}, compression: str | None = None)
austaltools._netcdf.timeconverter(old_unit, new_unit)

_storage

Module that provides functions to manage the storage locations for configuration and datasets that serve as input for austaltools

austaltools._storage.find_writeable_storage(locs: str = None, stor: str = None) str

Finds a viable data storage directory and returns its path. If stor is provided, only this path is checked for existence.

Parameters:
  • locs (str) – Candidate locations

  • stor (str) – Storage directory expected at location

Returns:

path to a writable data storage directory

Return type:

str

austaltools._storage.location_has_storage(location, storage)

Check if location has storage

Parameters:
  • location (str) – path to storage location

  • storage (str) – name of storage

Returns:

True if location has storage

Return type:

bool

austaltools._storage.locations_available(locs: list[str]) list[str]

Check whether locations exist

Parameters:

locs (list[str]) – paths of storage location directories

Returns:

locations that exist

Return type:

list[str]

austaltools._storage.locations_writable(locs: list[str]) list[str]

Check whether locations are writable

Parameters:

locs (list[str]) – paths of storage location directories

Returns:

locations that are writable

Return type:

list[str]
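
Both checks can be sketched with os.path and os.access (a hedged sketch, not the package implementation):

```python
import os

def locations_available(locs):
    # keep only candidate locations that exist as directories
    return [p for p in locs if os.path.isdir(p)]

def locations_writable(locs):
    # keep only locations the current user can write to
    return [p for p in locs if os.path.isdir(p) and os.access(p, os.W_OK)]
```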

austaltools._storage.read_config(locs: str = None) dict
austaltools._storage.write_config(config: dict, locs: str = None) bool
austaltools._storage.CONFIG_FILE = 'austaltools.yaml'

Name of the optional austaltools config file

austaltools._storage.DIST_AUX_FILES = PosixPath('/builds/druee/austaltools/austaltools/data')

path to the auxiliary data files distributed alongside the code

austaltools._storage.SIMPLE_DEFAULT_EXTENT = 10.0

default terrain extent in austaltools simple

austaltools._storage.SIMPLE_DEFAULT_TERRAIN = 'DGM25-DE'

default terrain source in austaltools simple

austaltools._storage.SIMPLE_DEFAULT_WEATHER = 'CERRA'

default weather source in austaltools simple

austaltools._storage.SIMPLE_DEFAULT_YEAR = 2003

default weather year in austaltools simple

austaltools._storage.STORAGES = ['terrain', 'weather']

storage directories that hold data inside the storage locations

austaltools._storage.STORAGE_LOCATIONS = ['/opt/austaltools', '/root/.local/share/austaltools', '/root/.austaltools', '.']

Default locations where downloaded or cached data are expected

austaltools._storage.STORAGE_TERRAIN = 'terrain'

storage directory that holds terrain data inside the storage locations

austaltools._storage.STORAGE_WAETHER = 'weather'

storage directory that holds weather data inside the storage locations

austaltools._storage.TEMP = '/tmp'

default path for temporary files/directories

_tools

class austaltools._tools.Building(*args, **kwargs)

A class representing the Geometry of a building

class austaltools._tools.Geometry(x: float = 0, y: float = 0, a: float = 0, b: float = 0, c: float = 0, w: float = 0)

A class that defines a geometric shape of the form that austal uses for sources and buildings. It is a cuboid of given width, depth, and height, that may be rotated around its southwest corner.

Parameters:
  • x (float (optional), default 0.) – x position of the south-west corner

  • y (float (optional), default 0.) – y position of the south-west corner

  • a (float (optional), default 0.) – width (along x-axis) of the cuboid

  • b (float (optional), default 0.) – depth (along y-axis) of cuboid

  • c (float (optional), default 0.) – height (along z-axis) of cuboid

  • w (float (optional), default 0.) – rotation angle anticlockwise around the south-west corner

a = 0.0
b = 0.0
c = 0.0
w = 0.0
x = 0.0
y = 0.0
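
The ground-plan corners of such a rotated cuboid follow from a standard rotation around (x, y); a sketch (footprint is a hypothetical helper, not part of the class):

```python
import math

def footprint(x, y, a, b, w):
    """Corners of an a-by-b rectangle rotated w degrees
    anticlockwise around its south-west corner (x, y)."""
    rad = math.radians(w)
    cw, sw = math.cos(rad), math.sin(rad)
    # rotate each corner offset (dx, dy) and shift by the anchor point
    return [(x + dx * cw - dy * sw, y + dx * sw + dy * cw)
            for dx, dy in ((0, 0), (a, 0), (a, b), (0, b))]
```
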
class austaltools._tools.GridASCII(file=None)

Class that represents a grid in ASCII format.

Example:
>>> grid = GridASCII("my_grid.asc")
>>> print(grid.header["ncols"])  # Access header values
>>> grid.write("output_grid.asc")  # Write grid data to a new file
read(file)

Reads the data from a GridASCII file into the object.

Parameters:

file (str) – file name (optionally including path)

Raises:

ValueError if file is not a GridASCII file

write(file=None)

Writes the data of the object into a GridASCII file.

Parameters:

file (str, optional) – file name (optionally including path). If missing, the name contained in the attribute file is used.

Raises:

ValueError if file is not a GridASCII file

data = None

gridded data

file = None

Path to the ASCII file.

header = {'NODATA_value': None, 'cellsize': None, 'ncols': None, 'nrows': None, 'xllcorner': None, 'yllcorner': None}

Dictionary containing header information.
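
The header keys follow the ESRI ASCII grid convention; parsing them can be sketched as (a simplified reader, not the class implementation):

```python
def read_grid_header(lines):
    """Parse the six header lines of an ESRI ASCII grid file."""
    header = {}
    for line in lines[:6]:
        key, value = line.split()
        # ncols/nrows are integer counts, everything else is a float
        header[key] = int(value) if key in ("ncols", "nrows") else float(value)
    return header
```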

class austaltools._tools.SmartFormatter(prog, indent_increment=2, max_help_position=24, width=None, color=True)

Custom Help Formatter that maintains ‘\n’ in argument help.

class austaltools._tools.Source(*args, **kwargs)

A class representing the Geometry of a pollutant source

class austaltools._tools.Spinner(text: str = None, step: int = None)
end()
spin()
spinner = '|/-\\\\'
step = 1
text = 'Working ...'
austaltools._tools.add_arguents_common_plot(parser: ArgumentParser) ArgumentParser

Add arguments to a parser

Parameters:

parser (argparse.ArgumentParser) – parser to add arguments to

Returns:

parser with added arguments

Return type:

argparse.ArgumentParser

austaltools._tools.add_location_opts(parser, stations=False, required=True)

This routine adds the input arguments defining a position:

Parameters:
  • parser (argparse.ArgumentParser) – the argument parser to add the options to

  • stations (bool) – WMO or DWD station numbers are accepted as positions

  • required (bool) – if a location specification is required

Note:
  • dwd (str or None): DWD option, mutually exclusive with ‘wmo’ and required with ‘ele’.

  • wmo (str or None): WMO option, mutually exclusive with ‘dwd’ and required with ‘ele’.

  • ele (str or None): Element option, required with either ‘dwd’ or ‘wmo’.

  • year (int or None): Year option, required with ‘-L’, ‘-G’, ‘-U’, ‘-D’, or ‘-W’.

  • output (str or None): Output name, required with ‘-L’, ‘-G’, ‘-U’, ‘-D’, or ‘-W’.

  • station (str or None): Station option, only valid with ‘dwd’ or ‘wmo’.

austaltools._tools.analyze_name(name)

determine wind direction, stability class and grid index from the filename of a file in the wind library

Parameters:

name (str) – filename

Returns:

grid ID, wind direction, and stability class

Return type:

tuple[int, int, int]

austaltools._tools.download(url, file, usr=None, pwd=None)

Downloads a file from a specified URL and saves it to a given local file path.

Parameters:
  • url (str) – The URL of the file to download.

  • file (str) – The local path, including the filename, where the downloaded file will be saved.

Returns:

The name of the file saved locally.

Return type:

str

Raises:

Exception – An exception is raised if the download fails (HTTP status code is not 200).

This function sends a GET request to the specified URL. If the request is successful (HTTP status code 200), it writes the content of the response to a file specified by the ‘file’ parameter. If the request fails, it raises an exception with information about the failure.

Example:
>>> try:
>>>     file_name = download('http://example.com/file.jpg', '/path/to/local/file.jpg')
>>>     print(f"Downloaded file saved as {file_name}")
>>> except Exception as e:
>>>     print(str(e))
austaltools._tools.download_earthdata(url, file, usr, pwd)

Downloads a file from a specified URL that needs authorization from earthdata.nasa.gov and saves it to a given local file path.

Parameters:
  • url (str) – The URL of the file to download.

  • file (str) – The local path, including the filename, where the downloaded file will be saved.

  • usr (str) – The username of the user to authenticate with.

  • pwd (str) – The password of the user to authenticate with.

Returns:

The name of the file saved locally.

Return type:

str

Raises:

Exception – An exception is raised if the download fails (HTTP status code is not 200).

This function sends a GET request to the specified URL. If the request is successful (HTTP status code 200), it writes the content of the response to a file specified by the ‘file’ parameter. If the request fails, it raises an exception with information about the failure.

Example:
>>> try:
>>>     file_name = download_earthdata('https://n5eil01u.ecs.nsidc.org/'
>>>                          'MOST/MOD10A1.006/2016.12.31/'
>>>                          'MOD10A1.A2016366.h14v03.006.'
>>>                          '2017002110336.hdf.xml',
>>>                          '/path/to/local/hdf.xml',
>>>                          'sampleuser', 'verysecret')
>>>     print(f"Downloaded file saved as {file_name}")
>>> except Exception as e:
>>>     print(str(e))
austaltools._tools.estimate_elevation(lat, lon)

Quick estimation of elevation at a position (for simple CLI use)

Parameters:
  • lat (float|str) – position latitude

  • lon (float|str) – position longitude

Returns:

elevation in m

Return type:

float

austaltools._tools.expand_sequence(string)

Parse a string representing a sequence of values

Parameters:

string (str) – The string to parse. The string can take the form of a comma-separated list <value>, <value>, …, <value> or the form <start>-<stop>/<step> (in which step is optional).

Returns:

The sequence of values as described by the string

Return type:

list[int]

Example:

>>> expand_sequence("1,2,3,4,5")
[1, 2, 3, 4, 5]
>>> expand_sequence("1-9/2")
[1, 3, 5, 7, 9]
Note:

If the string is a comma-separated list of integers, the values must be in increasing order.

List form and start-stop form are mutually exclusive

Raises:

ValueError if the string contains any characters other than digits, “,”, “-”, or “/”

Raises:

ValueError if the string contains “,” and “-” or “/”

Raises:

ValueError if the string is a comma-separated list of integers that is not in increasing order.
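
The documented rules can be sketched as follows (a hedged re-implementation based solely on the description above, not the actual package code):

```python
def expand_sequence(string):
    """Parse a comma-separated list or a <start>-<stop>/<step> range of integers."""
    if not set(string) <= set("0123456789,-/"):
        raise ValueError("invalid characters in sequence string")
    if "-" in string or "/" in string:
        if "," in string:
            raise ValueError("list form and start-stop form are mutually exclusive")
        # start-stop form, with an optional /step part
        start_stop, _, step = string.partition("/")
        start, _, stop = start_stop.partition("-")
        return list(range(int(start), int(stop) + 1, int(step) if step else 1))
    # comma-separated list form; values must be in increasing order
    values = [int(v) for v in string.split(",")]
    if values != sorted(values):
        raise ValueError("list values must be given in increasing order")
    return values
```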

austaltools._tools.find_austxt(wdir='.')
austaltools._tools.find_z0_class(z0)

return index of roughness-length class that matches z0 best

Parameters:

z0 – actual roughness length

Returns:

index of matching roughness-length class
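
Using the Z0_CLASSES values documented in this module, the lookup can be sketched as a nearest-value search (whether AUSTAL matches linearly or logarithmically is not specified here; linear distance is an assumption):

```python
Z0_CLASSES = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 1.5, 2.0]

def find_z0_class(z0):
    # index of the class value closest to z0 (linear distance is an assumption)
    return min(range(len(Z0_CLASSES)), key=lambda i: abs(Z0_CLASSES[i] - z0))
```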

austaltools._tools.get_austxt(path=None)

Get AUSTAL configuration from the file ‘austal.txt’ as dictionary

Parameters:

path – Configuration file. Defaults to

Type:

str, optional

Returns:

configuration

Return type:

dict

austaltools._tools.get_buildings(conf)

read the buildings defined in austal.txt and return a list of Building objects.

Parameters:

conf (dict) – austal configuration as dict

Returns:

list of Building objects

Return type:

list[Building]

Raises:

ValueError if the lists in each of the building-related configuration values are not all the same length.

austaltools._tools.jsonpath(json_obj, path)

Extracts values from specified keys or indices within a JSON object based on a given path.

Parameters:
  • json_obj – The JSON object (dict or list). This can be the result of json.loads() if using a JSON string.

  • path – A string representing the hierarchical path to the desired keys or indices. This path may include dictionary keys, list indices, and an optional filtering condition for dictionaries with specific key-value pairs.

Path Syntax

  • ‘key’: Selects the value associated with ‘key’ in a dictionary.

  • ‘[index]’: Selects the n-th element in a list (0-based index).

  • ‘key=value’: Selects dictionaries from a list of dictionaries where ‘key’ matches ‘value’.

  • Any combination of the above, separated by ‘/’ to navigate through nested structures.

  • an asterisk (*) may be specified instead of ‘key’ to match any key.

Returns:

A list containing the extracted values from the JSON object based on the input path.

Example:

>>> json_obj = {
>>>   "items": [
>>>       {"id": 1, "name": "Item 1"},
>>>       {"id": 2, "name": "Item 2", "extra": "yes"}
>>>   ]
>>>  }
...
>>> path_to_name = 'items/name'
>>> names = jsonpath(json_obj, path_to_name)
['Item 1', 'Item 2']
...
>>> path_to_extra = 'items/extra'
>>> extras = jsonpath(json_obj, path_to_extra)
['yes']
Note:

  • This function simplifies direct navigation and filtering in JSON objects but does not offer the full querying capabilities of more complex JSON querying libraries such as jsonpath-rw.

austaltools._tools.overlap(first: tuple[int | float, int | float], second: tuple[int | float, int | float]) bool
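
A plausible reading of overlap: two (low, high) intervals intersect when neither ends before the other begins (a sketch; closed-interval semantics are an assumption):

```python
def overlap(first, second):
    # True if the closed intervals (low, high) share at least one point
    return max(first[0], second[0]) <= min(first[1], second[1])
```
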
austaltools._tools.progress(itr: Iterable | None = None, desc: str = '', *args, **kwargs)

A progress bar that is shown if tqdm.tqdm is available and the log level is below logging.DEBUG

Parameters:
  • itr (list or iterable) – iterator

  • desc (str (optional)) – string displayed in the progress bar

  • args – arguments to tqdm.tqdm

  • kwargs – keyword arguments to tqdm.tqdm

Returns:

decorated iterator or itr, depending on the conditions

Return type:

iterator

austaltools._tools.prompt_timeout(prompt, timeout, default: str = None)

Ask the user a question, wait timeout seconds for an answer, then continue

Parameters:
  • prompt (str) – Text to show as prompt

  • timeout (int) – Time to wait for an answer in seconds

  • default (str|None) – Default answer

Returns:

The answer. If the user typed in something, it is returned; else the default.

Return type:

str|None

austaltools._tools.put_austxt(path='austal.txt', data=None)

Write AUSTAL configuration file ‘austal.txt’.

If the file exists, it will be rewritten. Configuration values in the file are kept unless data contains new values.

A backup file is created with a tilde appended to the filename.

Parameters:
  • path – File name. Defaults to ‘austal.txt’

  • data – Dictionary of configuration data. The keys are the AUSTAL configuration codes, the values are the configuration values as strings or space-separated lists

Type:

str, optional

Returns:

configuration

Return type:

dict

austaltools._tools.read_extracted_weather(csv_name: str) -> (<class 'float'>, <class 'float'>, <class 'float'>, <class 'str'>, <class 'str'>, <class 'pandas.DataFrame'>)

read weather data that were previously extracted from a dataset and stored into a csv file with specially crafted header line

Parameters:

csv_name (str) – file name and path

Returns:

latitude, longitude, elevation, roughness length z0, code of the original dataset, station name (if applicable), and the weather data

Return type:

float, float, float, str, str, pd.DataFrame

austaltools._tools.read_wind(file_info: dict, path: str = '.', grid: int = 0, centers: bool = False)

read wind library files

Parameters:
  • file_info (dict) – dict of lists containing names, stability classes, general wind directions, and grid indexes of all files.

  • path (str) – Wind library files are expected to be in this path

  • grid (int) – index of the grid for which to read the wind data

Returns:

u_grid, v_grid, axes

Return type:

tuple of (np.ndarray, np.ndarray, dict of lists of float)

austaltools._tools.slugify(value, allow_unicode=False)

Taken from https://github.com/django/django/blob/master/django/utils/text.py Convert to ASCII if ‘allow_unicode’ is False. Convert spaces or repeated dashes to single dashes. Remove characters that aren’t alphanumerics, underscores, or hyphens. Convert to lowercase. Also strip leading and trailing whitespace, dashes, and underscores.

austaltools._tools.str2bool(inp)

Convert a string to a boolean value.

Accepts the usual strings indicating the user’s consent or refusal.
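
A hedged sketch of the conversion (the exact set of accepted strings is an assumption, not taken from the package):

```python
def str2bool(inp):
    """Map common consent/refusal strings to a boolean value."""
    value = str(inp).strip().lower()
    if value in ("y", "yes", "j", "ja", "t", "true", "on", "1"):
        return True
    if value in ("n", "no", "nein", "f", "false", "off", "0"):
        return False
    raise ValueError(f"cannot interpret {inp!r} as boolean")
```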

austaltools._tools.wind_files(path)

find wind library files

Parameters:

path (str) – path where to search. Wind library files are expected to be in this path or in the subdirectory ‘lib’ of this path.

Returns:

dict of lists containing names, stability classes, general wind directions, and grid indexes of all files.

Return type:

dict[str, list]

austaltools._tools.wind_library(path)

Find the directory that contains the wind library

Parameters:

path (str) – user supplied path

Returns:

path to wind library

Return type:

str

austaltools._tools.xmlpath(xml, path)

Extracts text or attribute values from specified elements within an XML string based on a given path. The function implements only a small subset of the XPath syntax.

Parameters:
  • xml – The XML document as a str.

  • path – A string representing the hierarchical path to the desired elements. This path may include element names, indexes in square brackets for direct child selection, and an optional attribute filter or attribute name preceded by :: for final value extraction.

Path Syntax

  • 'element': Selects all children named element from the current node.

  • 'element[index]': Selects the n-th element among its siblings (0-based index).

  • 'element[@attribute="value"]': Selects all element nodes where the attribute matches the specified value.

  • 'element::attribute': Retrieves the value of an attribute named attribute from the selected elements.

  • Any combination of the above, separated by ‘/’ to navigate through child elements.

Returns:

A list containing the extracted data from the XML, either the text content of selected elements or the values of specified attributes, depending on the input path.

Example:
>>> xmlstring = '''<data>
...                     <item id="1">Item 1</item>
...                     <item id="2" extra="yes">Item 2</item>
...                </data>'''
...
>>> pathtotext = 'item'
>>> textresult = xmlpath(xmlstring, pathtotext)
['Item 1', 'Item 2']
...
>>> pathtoattribute = 'item::id'
>>> attributeresult = xmlpath(xmlstring, pathtoattribute)
['1', '2']
Note:

  • This function is designed to operate on well-formed XML strings. Malformed XML might lead to unexpected results.

  • The function uses Python’s built-in XML handling capabilities and regular expressions for parsing and navigating the XML.

  • Namespace handling: If the XML contains namespaces, they are automatically recognized and handled for tag matching.

Raises:

The function itself does not explicitly raise exceptions, but misuse (e.g., incorrect XML or path syntax) can lead to exceptions thrown by the underlying XML or regex processing libraries.
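The element navigation and ::attribute extraction described above can be sketched with the standard library's ElementTree. This is a simplified stand-in, not the actual austaltools implementation; it handles only plain element navigation separated by '/' and a trailing ::attribute, not indexes or attribute filters:

```python
import xml.etree.ElementTree as ET

def xmlpath_sketch(xml: str, path: str) -> list:
    # Simplified stand-in for austaltools._tools.xmlpath:
    # supports only 'a/b/c' navigation and a trailing '::attribute'.
    if "::" in path:
        path, attr = path.split("::", 1)
    else:
        attr = None
    nodes = [ET.fromstring(xml)]
    for part in path.split("/"):
        # descend one level: keep children whose tag matches this part
        nodes = [child for node in nodes for child in node
                 if child.tag == part]
    if attr is not None:
        return [n.get(attr) for n in nodes]
    return [n.text for n in nodes]

xmlstring = ('<data><item id="1">Item 1</item>'
             '<item id="2" extra="yes">Item 2</item></data>')
print(xmlpath_sketch(xmlstring, "item"))      # ['Item 1', 'Item 2']
print(xmlpath_sketch(xmlstring, "item::id"))  # ['1', '2']
```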

austaltools._tools.AUSTAL_POLLUTANTS_DUST
Pollutant dust substances that are defined by austal:

“pm”, “as”, “cd”, “hg”, “ni”, “pb”, “tl”, “ba”, “dx”, “xx”

austaltools._tools.AUSTAL_POLLUTANTS_DUST_CLASSES

Pollutant dusts that are defined by austal, each composed of a substance and a grain-size class (1-4 or x)

austaltools._tools.AUSTAL_POLLUTANTS_GAS
Pollutant gases that are defined by austal:

“so2”, “nox”, “no”, “no2”, “nh3”, “hg0”, “hg”, “bzl”, “f”, “xx”, “odor”, “odor_050”, “odor_065”, “odor_075”, “odor_100”, “odor_150”

austaltools._tools.DEFAULT_COLORMAP = 'YlOrRd'

Default colormap used for the common plot types

austaltools._tools.DEFAULT_WORKING_DIR = '.'

Default location for input and output

austaltools._tools.ELEVATION_API = 'https://api.open-elevation.com/api/v1/lookup'

API used for estimation of elevation

austaltools._tools.MAX_RETRY = 3

number of tries made to download a certain file
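A retry limit like this is typically consumed by a download loop. A minimal sketch of such a loop (the wrapper name and the callable are illustrative, not part of austaltools):

```python
MAX_RETRY = 3  # mirrors austaltools._tools.MAX_RETRY

def download_with_retry(fetch):
    # Illustrative wrapper: `fetch` is any callable that raises
    # OSError on failure; retry up to MAX_RETRY times, then re-raise.
    last_error = None
    for attempt in range(MAX_RETRY):
        try:
            return fetch()
        except OSError as err:
            last_error = err
    raise last_error

attempts = []
def flaky():
    # fails twice, succeeds on the third try
    attempts.append(1)
    if len(attempts) < 3:
        raise OSError("transient failure")
    return "payload"

print(download_with_retry(flaky))  # succeeds on the third attempt
```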

austaltools._tools.Z0_CLASSES = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 1.5, 2.0]

Surface roughness values corresponding to the roughness classes defined by austal
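To illustrate how these class values relate to a computed roughness length, a minimal sketch (the helper is hypothetical, not part of austaltools) that snaps an arbitrary roughness value to the nearest defined class:

```python
# Surface roughness classes defined by austal (Z0_CLASSES)
Z0_CLASSES = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 1.5, 2.0]

def nearest_z0_class(z0: float) -> float:
    # Hypothetical helper: return the class value closest to z0.
    return min(Z0_CLASSES, key=lambda c: abs(c - z0))

print(nearest_z0_class(0.3))  # 0.2 is closer to 0.3 than 0.5
```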

_wmo_metadata

This module contains functions that provide metadata about weather stations, extracted from the WMO OSCAR/Surface database https://oscar.wmo.int/surface

austaltools._wmo_metadata.by_wigos_id(id: str) dict

Return the station data entry of the station identified by its WIGOS ID.

Fills STATIONLIST stub if not yet filled.

Parameters:

id (str) – WIGOS ID

Returns:

station dataset

Return type:

dict

austaltools._wmo_metadata.by_wmo_id(id: str | int) dict

Return station data entry of station identified by its WMO number.

Fills STATIONLIST stub if not yet filled.

Parameters:

id (int|str) – WMO station number

Returns:

station dataset

Return type:

dict

austaltools._wmo_metadata.position(station: dict) tuple[float, float, float]

Return position of a station

Parameters:

station (dict) – station dataset

Returns:

Latitude, longitude and elevation

Return type:

(float, float, float)

austaltools._wmo_metadata.wigos_ids(station: dict) list[str]

Get the WIGOS IDs of a station

Parameters:

station (dict) – station dataset

Returns:

List of the WIGOS IDs

Return type:

list[str]

austaltools._wmo_metadata.wmo_stationinfo(wmoid: str | int) -> (float, float, float, str)

Return information about a station identified by its WMO number.

Parameters:

wmoid (int|str) – WMO station number

Returns:

Latitude, longitude, elevation and name

Return type:

(float, float, float, str)

austaltools._wmo_metadata.OSCARFILE = '/builds/druee/austaltools/austaltools/data/wmo_stationlist.json'

File holding the WMO station data retrieved from WMO OSCAR database

austaltools._wmo_metadata.STATIONLIST = {}

dictionary holding the WMO station data; an empty stub that is filled on first use