* removing reference to strncpy

* Fixing memory problems with test_toolkit

Fixes memory leaks and some minor refactoring.

* Update test_toolkit.hpp

removing crtdbg.h from header

* Update CMakeLists.txt

Restoring test_net_builder to test_toolkit.exe

* Cleaning up include statements; adding crtdbg.h

* Fixing index error in test

* Add more analysis options to the API (issue #425)

* Fixed epanet2_enums.h

* Eliminates use of temporary linked lists to process Patterns & Curves (issue #449)

* Update input2.c

* Bug fix for 2Comp and LIFO tank mixing models (issue #448)

* Triggering build to update benchmarks

* Added new reg tests

Updating reference build id

* Initial commit list

generic linked list

* Update test_list.cpp

Tests are passing

* Update list.h

Adding documentation

* Fix typo

* Fixing bug in head_list

* Fixing indentation

* Fixed memory leak

Fixed memory leak in test_head_list

* Clean up and inline comments

* Updating file headers

* Update list.c

Updating inline comments.

* Update test_list.cpp

* Fixing indent

Spaces not tabs

* Update list.c

Fixing indent

* Update test_list.cpp

Updating file header to reflect proper attribution

* Expanding test

Added test where data is a struct

* Fixing indent

* Work in progress

* Reorganized to contain list abstraction

* Update list.c

* Refactoring head_list and tail_list

Simplifying head and tail list. Adding delete_node() to list API.
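
As a rough illustration, a generic list node and a delete_node() operation in C might look like the sketch below; the names (list_t, node_t) and the field layout are assumptions, not the actual list.h API.

```
#include <stdlib.h>

/* Hypothetical generic list node holding untyped data */
typedef struct node_s {
    void *data;              /* caller-owned payload */
    struct node_s *next;
} node_t;

typedef struct {
    node_t *head;
} list_t;

/* Remove a given node from the list and free it; the payload is
   released with the supplied destructor (may be NULL). */
void delete_node(list_t *list, node_t *node, void (*free_data)(void *))
{
    node_t **link = &list->head;
    while (*link && *link != node) link = &(*link)->next;
    if (*link == NULL) return;       /* node not in this list */
    *link = node->next;              /* unlink */
    if (free_data) free_data(node->data);
    free(node);
}
```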

* Update test_list.cpp

* Update test_list.cpp

Fixing bug on gcc

* Fixing bug

* Fixing bug on gcc

* Update CMakeLists.txt

Adding test_list to ctest

* Fixes memory leak in EN_addnode() (#455)


* Fixing memory leak in EN_addnode()

* Separating test_net_builder from test_toolkit

Making test_net_builder a standalone test

* Removing BOOST_TEST_MAIN

* Work in progress

* Updating unit tests

* Fixing compilation bug on gcc

* Work in progress

compiles with warnings, definitely not working

* Update demand.h

* Work in progress

Implementing demand_list

* Work in progress

Creating a function for validating element ID strings
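
A sketch of the kind of check such a validation function performs; the helper name is hypothetical, and the rules shown (non-empty, within the maximum ID length, no spaces, double quotes or semicolons) are the assumed criteria.

```
#include <string.h>

#define MAXID 31   /* maximum characters in an ID name (assumed) */

/* Illustrative validity check for an element ID string */
int id_is_valid(const char *id)
{
    size_t n = strlen(id);
    if (n == 0 || n > MAXID) return 0;
    /* reject IDs containing spaces, double quotes or semicolons */
    return strcspn(id, " \";") == n;
}
```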

* Work in progress

Refactoring cstr_copy and adding test

* Update cstr_helper.c

fixing indentation

* Update cstr_helper.c

Fixing indentation

* Update test_cstrhelper.cpp

Fixed mem leak

* Adding element id validity checks

* Adding element id validity check

Adding checks for element set id functions

* Fixing build warnings on gcc

* Update error code from 250 to 252

* Work in progress

Implementing generic demand pattern lists. Compiles but does not run.

* Update demand.c

Work in progress

* Return object index from EN_addnode and EN_addlink (issue #432)

Adds an output argument to EN_addnode and EN_addlink that returns the index of the newly added object.
Also refactors the validity check on object ID names.
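
A sketch of the intended usage after this change, assuming the epanet2_2.h project-style API; error handling is omitted.

```
#include "epanet2_2.h"

void add_elements(EN_Project ph)
{
    int j1, j2, pipeIndex;

    /* the index of each newly added object is now returned
       through the final output argument */
    EN_addnode(ph, "J1", EN_JUNCTION, &j1);
    EN_addnode(ph, "J2", EN_JUNCTION, &j2);
    EN_addlink(ph, "P1", EN_PIPE, "J1", "J2", &pipeIndex);
}
```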

* Fixed compilation errors

* Update test_node.cpp

* Create test_demand_data.cpp

* test demand data passing

* Work in progress

Fixing problems when demand lists are null

* Passing open and close test

* get/set demand name are passing

* Updated criteria for valid object ID name

* Work in progress

* Work in progress

Working on demand lists

* Work in progress

Fixing memory leaks
Unit tests passing

* Cleaning up build on gcc

* Cleaning up gcc build

* Fixing bug

* Working on gcc bug

Tests are passing on Appveyor

* Update inpfile.c

Trying to isolate bug

* GCC Bug

* Refactored xstrcpy function

* Update inpfile.c

Testing linux build

* Update epanet.c

Trying to isolate bug

* updating get demand name and write demands

Everything passing locally

* Update test_project.cpp

Isolating bug on gcc

* Isolating bug

Not writing the demand section of the input file should eliminate it

* Update demand.c

Fixing bug in get_category_name when category_name is NULL

* Restoring write_demands section in saveinpfile

* Update test_demand_data.cpp

Adding index to addnode calls. Fixing indent

* Update demand.c

* Reverted handling of default pattern

When demands are created, the absence of a pattern is marked with a zero; the pattern is only updated to the default when the demand data is later adjusted.

* Update epanet.c

Updating EN_getnodevalue() and EN_setnodevalue() to process the primary demand located at the head of the demand list
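
In practice this means EN_BASEDEMAND reads and writes the demand category at the head of the node's demand list; a hedged sketch of the caller's view:

```
#include "epanet2_2.h"

void edit_primary_demand(EN_Project ph, int nodeIndex)
{
    double base;

    /* reads the base demand of the category at the head of
       the node's demand list */
    EN_getnodevalue(ph, nodeIndex, EN_BASEDEMAND, &base);

    /* writes back to that same primary category */
    EN_setnodevalue(ph, nodeIndex, EN_BASEDEMAND, base * 1.1);
}
```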

* Update demand.c

* Work in progress

code cleanup, addressed issue raised in review, and implemented EN_adddemand()
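
A sketch of how the new EN_adddemand() call might be used to append an extra demand category to a junction; the exact signature shown is an assumption.

```
#include "epanet2_2.h"

void add_second_demand(EN_Project ph, int nodeIndex)
{
    /* append a new demand category with its own base demand,
       time pattern, and category name */
    EN_adddemand(ph, nodeIndex, 0.5, "PAT1", "irrigation");
}
```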

* Adding key and search to list

* Adding remove node method to generic list

* Adding remove demand method to toolkit

* Fix bug and test remove demand

* Fix problems with setting tank parameters (issue #464 )

* Fixed NULL pointer error, if no label is provided after the rule keyword.

* Create Makefile2.bat

Co-Authored-By: Demetrios G. Eliades <eldemet@users.noreply.github.com>
Co-Authored-By: Elad Salomons <selad@optiwater.com>

* Create LICENSE

* Fixed NULL pointer error, if no label is provided after the rule keyword.
Add NULL guard in freerules function. Use strncat and strncpy to ensure
the buffer lengths are adhered to.
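
The general shape of that fix, sketched with illustrative names rather than the actual rules.c code:

```
#include <string.h>

#define MAXMSG 255

/* Guard against a missing label and never write past the buffer;
   dest must hold at least MAXMSG + 1 characters. */
void copy_rule_label(char *dest, const char *label)
{
    dest[0] = '\0';
    if (label == NULL) return;   /* no label given after the RULE keyword */
    strncpy(dest, label, MAXMSG);
    dest[MAXMSG] = '\0';         /* strncpy does not always terminate */
}
```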

* For "conditional" do delete a node connected to a link

For "conditional" deletion the node is deleted only if all of its links have been explicitly deleted beforehand #473

Co-Authored-By: Lew Rossman <lrossman@outlook.com>
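
A sketch of the resulting behavior from the API side, assuming the 2.2 action-code constants EN_CONDITIONAL and EN_UNCONDITIONAL:

```
#include "epanet2_2.h"

void delete_node_example(EN_Project ph, int nodeIndex)
{
    /* fails if the node still has links that were not deleted first */
    int err = EN_deletenode(ph, nodeIndex, EN_CONDITIONAL);

    if (err != 0)
    {
        /* deletes the node together with all of its connecting links */
        EN_deletenode(ph, nodeIndex, EN_UNCONDITIONAL);
    }
}
```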

* Create CODE_OF_CONDUCT.md

* Refactors the API's demand editing functions

* Update test_demand.cpp

* Update CODE_OF_CONDUCT.md

* Update rules.c

Fix broken win build script

* Updates to doc files

* Documentation edits

* Update Makefile.bat

Updates the Microsoft SDK 7.1 compilation script to generate runepanet.exe and to use \include\epanet2.def

* Update Makefile2.bat

Renamed epanet2.exe to runepanet.exe for consistency.

* Delete epanet2.def

Deleted the redundant `epanet2.def` file in the WinSDK folder

* Minor format change to status report

* Removing status reports from CI testing

* rm WinSDK folder and update Makefiles

Co-Authored-By: Demetrios G. Eliades <eldemet@users.noreply.github.com>

* Restored CI testing of status reports

* Removes _DEBUG directives from all source files

This commit removes the #ifdef _DEBUG statements at the top of all source code files per issue #482. It also updates the doc files to stress that the speedup observed for hydraulic analysis with the MMD node re-ordering method only applies to single period runs.

* Fix refactor of types.h

* updates authors

* updates AUTHORS and generator script

* Update run\CMakeLists.txt

* add help file win_build.md

Co-Authored-By: Elad Salomons <selad@optiwater.com>

* move win_build.md to root dir and rename to BUILDING.md

* Move BuildAndTest.md to the tools directory

* Update BUILDING.md

* Update BUILDING.md

* Update BUILDING.md

* Fixes problem with findpattern() function (issue #498)

* Change default properties for new pipe created with EN_addlink (issue #500)

* Numerous updates to project documentation

* Adds tank overflow feature
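
A hedged sketch of enabling the feature through the toolkit, assuming it is exposed as a node property named EN_CANOVERFLOW:

```
#include "epanet2_2.h"

void allow_overflow(EN_Project ph, int tankIndex)
{
    /* let the tank spill any excess inflow once it reaches its
       maximum level (property name assumed) */
    EN_setnodevalue(ph, tankIndex, EN_CANOVERFLOW, 1.0);
}
```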

* Updating docs for tank overflow feature

* Updating VB include files

* Update input3.c

* Identifies overflowing tank in Status Report

* Update Makefile.bat

* Update Makefile2.bat

#508

* rethinking the python wrapper (#511)

* renames certain function parameter declarations and removes double pointer call from the deleteproject function

* deprecates conditional compilation, removes python-specific headers and function renaming

* fixes tests and docs

* fixes test

* PDA fixes

* Minor update to force new CI test

* Another minor change to force another CI test

* Fixes Overflow and PDA tests not being run

* Fix EN_getelseaction and EN_setelseaction

Co-Authored-By: Andreas Ashikkis <andreasashikkis@users.noreply.github.com>

* Add -MT switch for CMake Windows build

* Updates to the docs

* Update BUILDING.md

* Build script updates

* Fixes EN_setlinkvalue bug

* fix in EN_deletelink

When pipes are deleted via EN_deletelink, the comment of the last link also gets deleted.

Co-Authored-By: Pavlos Pavlou <pavlou.v.pavlos@ucy.ac.cy>

* rm set to null in functions EN_deletenode, EN_deletelink

* trial actions config

* Update ccpp.yml

* welcome to the Actions beta

* fixes mkstemp file handle-leaking behavior (#529)

* reverts posix include (#533)

... because it is not needed

* Fixes bugs in pump and demand head loss gradients

* Removed dependence on unistd.h in project.c

Travis CI failed because the system could not find unistd.h.

* getTmpName() and xstrcpy() made safer

* Fixed use of strncpy in xstrcpy()
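
A sketch of the safer copy pattern with illustrative names; the real xstrcpy() signature is not shown here, so treat this as an assumption about the approach.

```
#include <stdlib.h>
#include <string.h>

/* Copy src into *dest, growing *dest as needed and always
   NUL-terminating the result. */
char *safe_strcpy(char **dest, const char *src)
{
    size_t n = strlen(src);
    char *p = realloc(*dest, n + 1);
    if (p == NULL) return *dest;   /* keep the old buffer on failure */
    memcpy(p, src, n + 1);         /* copies the terminating NUL */
    *dest = p;
    return p;
}
```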

* Refactor of hydcoeffs.c

Simplifies and unifies how limit on head gradient at low flow is handled.

* Update ReleaseNotes2_2.md

* Return error if node/link name is too long (#535)

* co-authored with @ehsan-shafiee

* removes errant slashes

* Throws correct error for ID name too long

* Revert "Throws correct error for ID name too long"

This reverts commit 57b4873f5882cb9fd983f7e1e5a703b9e442cd74.

* fixes #534 by bubbling error codes up from add node/link internal functions

* fixes tests on Mac at least

* fixes improper success code

* Error 252 (not 250) returned for ID name too long.

From errors.dat: DAT(252,"invalid ID name")
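
An illustrative caller-side check for the 252 code (the code itself comes from errors.dat as noted above):

```
#include <stdio.h>
#include "epanet2_2.h"

void rename_node(EN_Project ph, int nodeIndex, char *newId)
{
    int err = EN_setnodeid(ph, nodeIndex, newId);
    if (err == 252)
        printf("invalid ID name: \"%s\"\n", newId);
}
```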

* Fixes problems with EN_addnode() (#543)

See issue #542. Also modifies the test_node unit test to check that the fixup works.

* Adds EN_getresultindex function to the API

See issue #546. Also fixes a small bug in project.c.

* Adds link vertex get/set functions to the API
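
A sketch of what those calls might look like; the function names and signatures follow the pattern of the rest of the 2.2 API and should be treated as assumptions:

```
#include "epanet2_2.h"

void bend_pipe(EN_Project ph, int linkIndex)
{
    /* give the link two interior vertex points */
    double x[] = { 10.0, 20.0 };
    double y[] = { 15.0, 15.0 };
    EN_setvertices(ph, linkIndex, x, y, 2);

    /* read the first vertex back */
    int count;
    double vx, vy;
    EN_getvertexcount(ph, linkIndex, &count);
    EN_getvertex(ph, linkIndex, 1, &vx, &vy);
}
```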

* Fixes to EN_addlink and EN_deletelink

* Updates the docs

* Bug fix for EN_setcurve

Adjusts the parameters of any pump that uses the curve whose data is modified by EN_setcurve or EN_setcurvevalue (issue #550).
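
A sketch of the situation the fix addresses, assuming the standard EN_setcurve signature: after the call, any pump whose head curve is the modified curve has its coefficients recomputed.

```
#include "epanet2_2.h"

void update_pump_curve(EN_Project ph, int curveIndex)
{
    double x[] = { 0.0, 1000.0, 2000.0 };
    double y[] = { 200.0, 150.0, 50.0 };

    /* any pump using this curve as its head curve has its internal
       head-flow coefficients refreshed after this call */
    EN_setcurve(ph, curveIndex, x, y, 3);
}
```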

* Bug fix for EN_getrule

Fixes possible seg fault condition in EN_getrule. Also defines EN_MISSING as an API constant since it can be assigned internally to several variables that are retrievable by the API.

* Updating the docs

* Adds error check to EN_setheadcurveindex

See issue #556.

* Update epanet2.pas

* Incorrect character

There was a ’ character instead of ', which caused an error when compiling LaTeX.

* fixes a crashing issue in freedata (#559)

The freedata function used cached values for sizes of certain arrays found in the parser struct. However, now that the network is mutable, those values can become invalid. Relying instead on the actual array lengths prevents freeing unallocated memory, or ignoring cleanup on newly created elements.

* Bug fix for valvecheck function

See issue #561

* Restored prior update to project.c that got overwritten

* Fixed editing errors made to project.c

* PDF Guide

PDF users' guide for EPANET, and some minor corrections to readme.md to fix formatting issues.

* HTML Users Guide

* Fixes a "copy over" bug in input3.c

The copying of one input line token over another was causing a compilation error under Clang. With v2.2 this copying is no longer needed so the line of code in question was simply deleted.

This commit also deletes the HTML and Latex output generated by running Doxygen that got added from the previous update to dev since they don't really belong in a source code repo.

* Correction made to doc files

The output-format.dox file was deprecated and not included in the doxyfile, so it was deleted. The description of the format of the Energy Usage section of the binary output in toolkit-files.dox was corrected.

* Update ReleaseNotes2_2.md

I added the v2.2 contributing authors to the notes. I checked PRs from 2017 and beyond and these were the only names I could find. Please append anyone I might have missed.

* Fixes problem with re-opening const. HP pumps

See latest comments in issue #528. Also, the setlinkflow() function was deleted as it was never called anywhere.

* Update README.md (#539)

* Update README.md

* Update README.md

Some section titles were renamed to conform to GitHub guidelines, and the OWA info was moved to a CREDITS section.

* Update README.md

Added link to the Community Forum page.

* Replaced OWA copyright with "(see AUTHORS)".

* Update AUTHORS

Copied format used by the OWA-SWMM project.

* Update README.md

The Disclaimer section was edited to reflect that there actually is a "collaborative" connection between USEPA and OWA.

* updates CI badges

* cleanup of readme links and unused files

* possessive vs contraction

* adding contributor to notes

Author: Sam Hatchett
Date: 2019-12-10 10:19:36 -05:00
Committed by: GitHub
Parent: 36381129e6
Commit: 4d8d82ddc2
166 changed files with 77192 additions and 21553 deletions

tools/.gitignore (new file)

@@ -0,0 +1,27 @@
# Python compiler files
*.py[cd]
# Python distribution and packaging
build/
dist/
temp/
*.cfg
*.egg-info/
*.whl
# SWIG generated files
epanet_output_wrap.c
epanet_output.py
# C compiler
*.o
*.dll
*.exe
# Eclipse project files and directories
.metadata/
.settings/
Release/
.project
.cproject
.pydevproject

tools/BuildAndTest.md (new file)

@@ -0,0 +1,183 @@
## Building OWA EPANET From Source on Windows
by Michael E. Tryby
Created on: March 13, 2018
### Introduction
Building OWA's fork of EPANET from source is a basic skill that all developers
interested in contributing to the project should know how to perform. This
document describes the build process step-by-step. You will learn 1) how to
configure your machine to build the project locally; 2) how to obtain the
project files using git; 3) how to use cmake to generate build files and build
the project; and 4) how to use ctest and nrtest to perform unit and regression
testing on the build artifacts produced. Be advised, you will need local admin
privileges on your machine to follow this tutorial. Let's begin!
### Dependencies
Before the project can be built the required tools must be installed. The OWA
EPANET project adheres to a platform compiler policy - for each platform there
is a designated compiler. The platform compiler for Windows is Visual
Studio cl, for Linux gcc, and for Mac OS clang. These instructions describe how
to build EPANET on Windows. CMake is a cross platform build, testing, and packaging
tool that is used to automate the EPANET build workflow. Boost is a free portable
peer-reviewed C++ library. Unit tests are linked with Boost unit test libraries.
Lastly, git is a free and open source distributed version control system. Git must
be installed to obtain the project source code from the OWA EPANET repository
found on GitHub.
### Summary of Build Dependencies
- Platform Compiler
- Windows: Visual Studio 10.0 32-bit cl (version 16.00.40219.01 for 80x86)
- CMake (version 3.0.0 or greater)
- Boost Libraries (version 1.58 or greater)
- git (version 2.6.0 or greater)
### Build Procedure
1. Install Dependencies
* A. Install Visual Studio 2010 Express and SP1
Our current benchmark platform and compiler is Windows 32-bit Visual Studio 10
2010. Older versions of Visual Studio are available for download here:
https://www.visualstudio.com/vs/older-downloads/
A service pack for Visual Studio 10 2010 is available here:
https://www.microsoft.com/en-us/download/details.aspx?id=34677
* B. Install Boost
Boost binaries for Windows offer a convenient installation solution. Be sure to
select for download the boost installer exe that corresponds to the version of Visual Studio you have installed.
https://sourceforge.net/projects/boost/files/boost-binaries/1.58.0/
Although newer versions of Boost are available, a link to Boost 1.58 is provided. This is the library version that the unit tests have been written against. Older versions of Boost may not work. The default install location for the Boost
Libraries is C:\local\boost_1_58_0
* C. Install Chocolatey, CMake, and git
Chocolatey is a Windows package manager that makes installing some of these
dependencies a little easier. When working with Chocolatey it is useful to have
local admin privileges. Chocolatey is available here:
https://chocolatey.org/install
Once Chocolatey is installed, from a command prompt running with admin privileges
issue the following commands
```
\>choco install -y cmake --installargs 'ADD_CMAKE_TO_PATH=User'
\>choco install -y git --installargs /GitOnlyOnPath
\>refreshenv
```
* D. Common Problems
Using chocolatey requires a command prompt with admin privileges.
Check to make sure installed applications are on the command path.
Make note of the Boost Library install location.
2. Build The Project
As administrator open a Visual Studio 2010 Command Prompt. Change directories to
the location where you wish to build the EPANET project. Now we will issue a series
of commands to create a parent directory for the project root and clone the project
from OWA's GitHub repository.
* A. Clone the EPANET Repository
```
\>mkdir OWA
\>cd OWA
\>git clone --branch=dev https://github.com/OpenWaterAnalytics/EPANET.git
\>cd EPANET
```
The present working directory is now the project root EPANET. The directory contains
the same files that are visibly present in the GitHub Repo by browsing to the URL
https://github.com/OpenWaterAnalytics/EPANET/tree/dev.
Now we will create a build products directory and generate the platform build
file using cmake.
* B. Generate the build files
```
\>mkdir buildprod
\>cd buildprod
\>set BOOST_ROOT=C:\local\boost_1_58_0
\>cmake -G "Visual Studio 10 2010" -DBOOST_ROOT="%BOOST_ROOT%" -DBoost_USE_STATIC_LIBS="ON" ..
```
Now that the dependencies have been installed and the build system has been
generated, building EPANET is a simple CMake command.
* C. Build EPANET
\>cmake --build . --config Debug
* D. Common Problems
CMake may not be able to find the project CMakeLists.txt file or the Boost
library install location.
3. Testing
Unit Testing uses Boost Unit Test library and CMake ctest as the test runner.
Cmake has been configured to register tests with ctest as part of the build process.
* A. Unit Testing
```
\>cd tests
\>ctest -C Debug
```
The unit tests run quietly. Ctest redirects stdout to a log file which can be
found in the "tests\Testing\Temporary" folder. This is useful when a test fails.
Regression testing is somewhat more complicated because it relies on Python
to execute EPANET for each test and compare the binary files and report files.
To run regression tests, Python and any required packages must first be installed.
If Python is already installed on your local machine the installation of
miniconda can be skipped.
* B. Installing Regression Testing Dependencies
```
cd ..\..
\>choco install -y miniconda --installargs '/AddToPath=1'
\>choco install -y curl
\>choco install -y 7zip
\>refreshenv
\>pip install -r tools/requirements-appveyor.txt
```
With Python and the necessary dependencies installed, regression testing can be run
using the before-test and run-nrtest helper scripts found in the tools folder. The script
before-test stages the test and benchmark files for nrtest. The script run-nrtest calls
nrtest execute and nrtest compare to perform the regression test.
To run the executable under test, nrtest needs the absolute path to it and a
unique identifier for it such as the version number. The project cmake build places build
artifacts in the buildprod\bin\ folder. On Windows the build configuration "Debug" or
"Release" must also be indicated. On Windows it is also necessary to specify the path to
the Python Scripts folder so the nrtest execute and compare commands can be found. You
need to substitute bracketed fields below like "<build identifier>" with the values for
your setup.
* C. Regression Testing
```
\>tools\before-test.cmd <relative path to regression test location> <absolute path to exe under test> <build identifier>
\>tools\run-nrtest.cmd <absolute path to python scripts> <relative path to regression test location> <build identifier>
```
* D. Common Problems
The batch file before-test.cmd needs to run with admin privileges. The nrtest script complains when it can't find manifest files.
That concludes this tutorial on building OWA EPANET from source on Windows.
You have learned how to configure your machine satisfying project dependencies
and how to acquire, build, and test EPANET on your local machine. To be sure,
there is a lot more to learn, but this is a good start! Learn more about project
build and testing dependencies by following the links provided below.
### Further Reading
* Visual Studio - https://msdn.microsoft.com/en-us/library/dd831853(v=vs.100).aspx
* CMake - https://cmake.org/documentation/
* Boost - http://www.boost.org/doc/
* git - https://git-scm.com/doc
* Miniconda - https://conda.io/docs/user-guide/index.html
* curl - https://curl.haxx.se/
* 7zip - https://www.7-zip.org/
* nrtest - https://nrtest.readthedocs.io/en/latest/

tools/app-config.sh (new executable file)

@@ -0,0 +1,46 @@
#! /bin/bash
#
# app-config.sh - Generates nrtest app configuration file for test executable
#
# Date Created: 3/19/2018
#
# Author: Michael E. Tryby
# US EPA - ORD/NRMRL
#
# Arguments:
# 1 - absolute path to test executable
#
# NOT IMPLEMENTED YET
# 2 - test executable version number
# 3 - build description
#
unameOut="$(uname -s)"
case "${unameOut}" in
Linux*) ;&
Darwin*) abs_build_path=$1
test_cmd="runepanet"
;;
MINGW*) ;&
MSYS*) # Remove leading '/c' from file path for nrtest
abs_build_path="$( echo "$1" | sed -e 's#/c##' )"
test_cmd="runepanet.exe"
;;
*) # Machine unknown
esac
version=""
build_description=""
cat<<EOF
{
"name" : "epanet",
"version" : "${version}",
"description" : "${build_description}",
"setup_script" : "",
"exe" : "${abs_build_path}/${test_cmd}"
}
EOF

tools/before-test.cmd (new file)

@@ -0,0 +1,95 @@
::
:: before-test.cmd - Prepares AppVeyor CI worker to run epanet regression tests
::
:: Date Created: 4/3/2018
::
:: Author: Michael E. Tryby
:: US EPA - ORD/NRMRL
::
:: Arguments:
:: 1 - (platform)
:: 2 - (build identifier for reference)
:: 3 - (build identifier for software under test)
:: 4 - (version identifier for software under test)
:: 5 - (relative path regression test file staging location)
::
:: Note:
:: Tests and benchmark files are stored in the epanet-example-networks repo.
:: This script retrieves them using a stable URL associated with a GitHub
:: release and stages the files for nrtest to run. The script assumes that
:: before-test.cmd and gen-config.cmd are located together in the same folder.
::
@echo off
setlocal EnableExtensions
IF [%1]==[] ( set PLATFORM=
) ELSE ( set "PLATFORM=%~1" )
IF [%2]==[] ( echo "ERROR: REF_BUILD_ID must be defined" & exit /B 1
) ELSE (set "REF_BUILD_ID=%~2" )
IF [%3]==[] ( set "SUT_BUILD_ID=local"
) ELSE ( set "SUT_BUILD_ID=%~3" )
IF [%4]==[] (set SUT_VERSION=
) ELSE ( set "SUT_VERSION=%~4" )
IF [%5]==[] ( set "TEST_HOME=nrtestsuite"
) ELSE ( set "TEST_HOME=%~5" )
echo INFO: Staging files for regression testing
:: determine SUT executable path
set "SCRIPT_HOME=%~dp0"
:: TODO: This may fail when there is more than one cmake buildprod folder
FOR /D /R "%SCRIPT_HOME%..\" %%a IN (*) DO IF /i "%%~nxa"=="bin" set "BUILD_HOME=%%a"
set "SUT_PATH=%BUILD_HOME%\Release"
:: determine platform from CMakeCache.txt
IF NOT DEFINED PLATFORM (
FOR /F "tokens=*" %%p IN ( 'findstr CMAKE_SHARED_LINKER_FLAGS:STRING %BUILD_HOME%\..\CmakeCache.txt' ) DO ( set "FLAG=%%p" )
FOR /F "delims=: tokens=3" %%m IN ( 'echo %FLAG%' ) DO IF "%%m"=="x64" ( set "PLATFORM=win64" ) ELSE ( set "PLATFORM=win32" )
)
:: hack to determine latest tag in epanet-example-networks repo
set "LATEST_URL=https://github.com/OpenWaterAnalytics/epanet-example-networks/releases/latest"
FOR /F delims^=^"^ tokens^=2 %%g IN ('curl --silent %LATEST_URL%') DO ( set "LATEST_TAG=%%~nxg" )
IF defined LATEST_TAG (
set "TESTFILES_URL=https://github.com/OpenWaterAnalytics/epanet-example-networks/archive/%LATEST_TAG%.zip"
set "BENCHFILES_URL=https://github.com/OpenWaterAnalytics/epanet-example-networks/releases/download/%LATEST_TAG%/benchmark-%PLATFORM%-%REF_BUILD_ID%.zip"
) ELSE ( echo ERROR: Unable to determine latest tag & EXIT /B 1 )
:: create a clean directory for staging regression tests
IF exist %TEST_HOME% (
rmdir /s /q %TEST_HOME%
)
mkdir %TEST_HOME%
cd %TEST_HOME%
:: retrieve epanet-examples for regression testing
curl -fsSL -o examples.zip %TESTFILES_URL%
:: retrieve epanet benchmark results
curl -fsSL -o benchmark.zip %BENCHFILES_URL%
:: extract tests, benchmarks, and manifest
7z x examples.zip *\epanet-tests\* > nul
7z x benchmark.zip -obenchmark\ > nul
7z e benchmark.zip -o. manifest.json -r > nul
:: set up symlink for tests directory
mklink /D .\tests .\epanet-example-networks-%LATEST_TAG:~1%\epanet-tests > nul
:: generate json configuration file for software under test
mkdir apps
%SCRIPT_HOME%\gen-config.cmd %SUT_PATH% %PLATFORM% %SUT_BUILD_ID% %SUT_VERSION% > apps\epanet-%SUT_BUILD_ID%.json

tools/before-test.sh (new executable file)

@@ -0,0 +1,103 @@
#! /bin/bash
#
# before-test.sh - Prepares Travis CI worker to run epanet regression tests
#
# Date Created: 04/04/2018
#
# Author: Michael E. Tryby
# US EPA - ORD/NRMRL
#
# Arguments:
# 1 - (platform)
# 2 - (build id for reference)
# 3 - (build id for software under test)
# 4 - (version id for software under test)
# 5 - (relative path regression test file staging location)
#
# Note:
# Tests and benchmark files are stored in the epanet-example-networks repo.
# This script retrieves them using a stable URL associated with a release on
# GitHub and stages the files for nrtest to run. The script assumes that
# before-test.sh and gen-config.sh are located together in the same folder.
if [ -z "$1" ]; then
unset PLATFORM;
else
PLATFORM=$1;
fi
if [ -z "$2" ]; then
echo "ERROR: REF_BUILD_ID must be defined"; exit 1;
else
REF_BUILD_ID=$2;
fi
if [ -z "$3" ]; then
SUT_BUILD_ID="local";
else
SUT_BUILD_ID=$3;
fi
if [ -z "$4" ]; then
SUT_VERSION="unknown";
else
SUT_VERSION=$4; fi
if [ -z "$5" ]; then
TEST_HOME="nrtestsuite";
else
TEST_HOME=$5; fi
echo INFO: Staging files for regression testing
SCRIPT_HOME="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
BUILD_HOME="$(dirname "$SCRIPT_HOME")"
SUT_PATH=(`find "$BUILD_HOME" -name "bin" -type d`)
# TODO: determine platform
# hack to determine latest tag from GitHub
LATEST_URL="https://github.com/OpenWaterAnalytics/epanet-example-networks/releases/latest"
LATEST_TAG="$(curl -sI "${LATEST_URL}" | grep -Po 'tag\/\K(v\S+)')"
TEST_URL="https://codeload.github.com/OpenWaterAnalytics/epanet-example-networks/tar.gz/${LATEST_TAG}"
BENCH_URL="https://github.com/OpenWaterAnalytics/epanet-example-networks/releases/download/${LATEST_TAG}/benchmark-${PLATFORM}-${REF_BUILD_ID}.tar.gz"
# create a clean directory for staging regression tests
if [ -d "${TEST_HOME}" ]; then
rm -rf "${TEST_HOME}"
fi
mkdir "${TEST_HOME}"
cd "${TEST_HOME}" || exit 1
# retrieve epanet-examples for regression testing
if ! curl -fsSL -o examples.tar.gz "${TEST_URL}"; then
echo "ERROR: curl - ${TEST_URL}" & exit 2
fi
# retrieve epanet benchmark results
if ! curl -fsSL -o benchmark.tar.gz "${BENCH_URL}"; then
echo "ERROR: curl - ${BENCH_URL}" & exit 3
fi
# extract tests, benchmarks, and manifest
tar xzf examples.tar.gz
ln -s "epanet-example-networks-${LATEST_TAG:1}/epanet-tests" tests
mkdir benchmark
tar xzf benchmark.tar.gz -C benchmark
tar xzf benchmark.tar.gz --wildcards --no-anchored --strip-components=1 '*/manifest.json' -C .
# generate json configuration file for software under test
mkdir apps
${SCRIPT_HOME}/gen-config.sh ${SUT_PATH} ${PLATFORM} ${SUT_BUILD_ID} ${SUT_VERSION} > apps/epanet-${SUT_BUILD_ID}.json

tools/gen-config.cmd (new file)

@@ -0,0 +1,41 @@
::
:: gen-config.cmd - Generates nrtest app configuration file for test executable
::
:: Date Created: 1/8/2018
::
:: Author: Michael E. Tryby
:: US EPA - ORD/NRMRL
::
:: Arguments:
:: 1 - absolute path to test executable (valid path separator for nrtest is "/")
:: 2 - (platform)
:: 3 - (build identifier for SUT)
:: 4 - (commit hash string)
@echo off
setlocal
:: epanet target created by the cmake build script
set TEST_CMD=runepanet.exe
:: remove quotes from path and convert backward to forward slash
set ABS_BUILD_PATH=%~1
set ABS_BUILD_PATH=%ABS_BUILD_PATH:\=/%
IF [%2]==[] ( set "PLATFORM=unknown"
) ELSE ( set "PLATFORM=%~2" )
IF [%3]==[] ( set "BUILD_ID=unknown"
) ELSE ( set "BUILD_ID=%~3" )
IF [%4]==[] ( set "VERSION=unknown"
) ELSE ( set "VERSION=%~4" )
echo {
echo "name" : "epanet",
echo "version" : "%VERSION%",
echo "description" : "%PLATFORM% %BUILD_ID%",
echo "setup_script" : "",
echo "exe" : "%ABS_BUILD_PATH%/%TEST_CMD%"
echo }

tools/gen-config.sh (new executable file)

@@ -0,0 +1,42 @@
#! /bin/bash
#
# gen-config.sh - Generates nrtest app configuration file for test executable
#
# Date Created: 10/16/2017
#
# Author: Michael E. Tryby
# US EPA - ORD/NRMRL
#
# Arguments:
# 1 - absolute path to test executable
# 2 - platform
# 3 - SUT build id
# 4 - SUT version id
#
unameOut="$(uname -s)"
case "${unameOut}" in
Linux*) ;&
Darwin*) abs_build_path=$1
test_cmd="runepanet"
;;
MINGW*) ;&
MSYS*) # Remove leading '/c' from file path for nrtest
abs_build_path="$( echo "$1" | sed -e 's#/c##' )"
test_cmd="runepanet.exe"
;;
*) # Machine unknown
esac
cat<<EOF
{
"name" : "epanet",
"version" : "$4",
"description" : "$2 $3",
"setup_script" : "",
"exe" : "${abs_build_path}/${test_cmd}"
}
EOF

tools/nrtest-epanet/main.py (new file)

@@ -0,0 +1,151 @@
import numpy as np
import time
import cStringIO
import itertools as it
# project import
import nrtest_epanet.output_reader as er
def result_compare(path_test, path_ref, comp_args):
isclose = True
close = 0
notclose = 0
equal = 0
total = 0
output = cStringIO.StringIO()
eps = np.finfo(float).eps
min_cdd = 100.0
start = time.time()
test_reader = er.output_generator(path_test)
ref_reader = er.output_generator(path_ref)
for test, ref in it.izip(test_reader, ref_reader):
total += 1
if total%100000 == 0:
print(total)
if len(test[0]) != len(ref[0]):
raise ValueError('Inconsistent lengths')
# Skip results if they are zero or equal
#if np.array_equal(test, ref):
# equal += 1
# continue
else:
try:
diff = np.fabs(np.subtract(test[0], ref[0]))
idx = np.unravel_index(np.argmax(diff), diff.shape)
if diff[idx] != 0.0:
tmp = - np.log10(diff[idx])
if tmp < min_cdd:
min_cdd = tmp;
except AssertionError as ae:
notclose += 1
output.write(str(ae))
output.write('\n\n')
continue
stop = time.time()
print(output.getvalue())
output.close()
print('mincdd: %d in %f (sec)' % (np.floor(min_cdd), (stop - start)))
#print('equal: %d close: %d notclose: %d total: %d in %f (sec)\n' %
# (equal, close, notclose, total, (stop - start)))
if notclose > 0:
print('%d differences found\n' % notclose)
isclose = False
return isclose
from nrtest.testsuite import TestSuite
from nrtest.compare import compare_testsuite, validate_testsuite
def nrtest_compare(path_test, path_ref, (comp_args)):
ts_new = TestSuite.read_benchmark(path_test)
ts_old = TestSuite.read_benchmark(path_ref)
if not validate_testsuite(ts_new) or not validate_testsuite(ts_old):
exit(1)
try:
# logging.info('Found %i tests' % len(ts_new.tests))
compatible = compare_testsuite(ts_new, ts_old, comp_args[0], comp_args[1])
except KeyboardInterrupt:
# logging.warning('Process interrupted by user')
compatible = False
# else:
# logging.info('Finished')
# Non-zero exit code indicates failure
exit(not compatible)
def nrtest_execute(app_path, test_path, output_path):
import logging
import glob
from os import listdir
from os.path import exists, isfile, isdir, join
from nrtest.execute import execute_testsuite, validate_testsuite
# for path in test_path + [app_path]:
# if not exists(path):
# logging.error('Could not find path: "%s"' % path)
test_dirs = glob.glob(test_path)
test_files = [p for p in test_path if isfile(p)]
test_files += [p for d in test_dirs for p in glob.glob(d + '*.json')]
# if p.endswith('.json')]
test_files = list(set(test_files)) # remove duplicates
ts = TestSuite.read_config(app_path, test_files, output_path)
if not validate_testsuite(ts):
exit(1)
try:
logging.info('Found %i tests' % len(test_files))
success = execute_testsuite(ts)
ts.write_manifest()
except KeyboardInterrupt:
logging.warning('Process interrupted by user')
success = False
else:
logging.info('Finished')
# Non-zero exit code indicates failure
exit(not success)
import report_diff as rd
if __name__ == "__main__":
# app_path = "apps\\swmm-5.1.11.json"
# test_path = "tests\\examples\\example1.json"
# output_path = "benchmarks\\test\\"
# nrtest_execute(app_path, test_path, output_path)
# test_path = "C:\\Users\\mtryby\\Workspace\\GitRepo\\Local\\epanet-testsuite\\benchmarks\\v2011a"
# ref_path = "C:\\Users\\mtryby\\Workspace\\GitRepo\\Local\\epanet-testsuite\\benchmarks\\v2012"
# print(nrtest_compare(test_path, ref_path, (0.001, 0.0)))
benchmark_path = "C:\\Users\\mtryby\\Workspace\\GitRepo\\michaeltryby\\epanet-lr\\nrtestsuite\\benchmarks\\"
path_test = benchmark_path + "epanet-220dev\\example2\\example2.out"
path_ref = benchmark_path + "epanet-2012\\example2\\example2.out"
#result_compare(path_test, path_ref, (0.001, 0.0))
rd.report_diff(path_test, path_ref, 2)

nrtest_epanet/__init__.py (new file)

@@ -0,0 +1,166 @@
# -*- coding: utf-8 -*-
#
# __init__.py - nrtest_epanet module
#
# Author: Michael E. Tryby
# US EPA - ORD/NRMRL
#
'''
Numerical regression testing (nrtest) plugin for comparing EPANET binary results
files and EPANET text based report files.
'''
# system imports
import itertools as it
# third party imports
import header_detail_footer as hdf
import numpy as np
# project import
import nrtest_epanet.output_reader as ordr
__author__ = "Michael Tryby"
__copyright__ = "None"
__credits__ = "Colleen Barr, Maurizio Cingi, Mark Gray, David Hall, Bryant McDonnell"
__license__ = "CC0 1.0 Universal"
__version__ = "0.5.0"
__date__ = "September 6, 2017"
__maintainer__ = "Michael Tryby"
__email__ = "tryby.michael@epa.gov"
__status = "Development"
def epanet_allclose_compare(path_test, path_ref, rtol, atol):
'''
Compares results in two EPANET binary files using the comparison criteria
described in the numpy assert_allclose documentation.
(test_value - ref_value) <= atol + rtol * abs(ref_value)
Returns true if all of the results in the two binary files meet the
comparison criteria; otherwise, an AssertionError is thrown.
Numpy allclose is quite expensive to evaluate. Test and reference results
are checked to see if they are equal before being compared using the
allclose criteria. This reduces comparison times significantly.
Arguments:
path_test - path to result file being tested
path_ref - path to reference result file
rtol - relative tolerance
atol - absolute tolerance
Returns:
True
Raises:
ValueError()
AssertionError()
...
'''
for (test, ref) in it.izip(ordr.output_generator(path_test),
ordr.output_generator(path_ref)):
if len(test[0]) != len(ref[0]):
raise ValueError('Inconsistent lengths')
# Skip over arrays that are equal
if np.array_equal(test[0], ref[0]):
continue
else:
np.testing.assert_allclose(test[0], ref[0], rtol, atol)
return True
def epanet_mincdd_compare(path_test, path_ref, rtol, atol):
'''
Compares the results of two EPANET binary files using a correct decimal
digits (cdd) comparison criteria:
min cdd(test, ref) >= atol
Returns true if min cdd in the file is greater than or equal to atol,
otherwise an AssertionError is thrown.
Arguments:
path_test - path to result file being testedgit
path_ref - path to reference result file
rtol - ignored
atol - minimum allowable cdd value (i.e. 3)
Returns:
True
Raises:
ValueError()
AssertionError()
'''
min_cdd = 100.0
for (test, ref) in it.izip(ordr.output_generator(path_test),
ordr.output_generator(path_ref)):
if len(test[0]) != len(ref[0]):
raise ValueError('Inconsistent lengths')
# Skip over arrays that are equal
if np.array_equal(test[0], ref[0]):
continue
else:
diff = np.fabs(np.subtract(test[0], ref[0]))
idx = np.unravel_index(np.argmax(diff), diff.shape)
if diff[idx] != 0.0:
tmp = - np.log10(diff[idx])
if tmp < min_cdd:
min_cdd = tmp;
if np.floor(min_cdd) >= atol:
return True
else:
raise AssertionError('min_cdd=%d less than atol=%g' % (min_cdd, atol))
def epanet_report_compare(path_test, path_ref, rtol, atol):
'''
Compares results in two report files ignoring contents of header and footer.
Note: Header is 11 lines with report summary turned off. This test will fail
if the report summary is turned on because a time stamp is being written
immediately after it.
Arguments:
path_test - path to result file being tested
path_ref - path to reference result file
rtol - ignored
atol - ignored
Returns:
True or False
Raises:
HeaderError()
FooterError()
RunTimeError()
...
'''
HEADER = 10
FOOTER = 2
with open(path_test ,'r') as ftest, open(path_ref, 'r') as fref:
for (test_line, ref_line) in it.izip(hdf.parse(ftest, HEADER, FOOTER)[1],
hdf.parse(fref, HEADER, FOOTER)[1]):
if test_line != ref_line:
return False
return True

nrtest_epanet/output_reader.py (new file)

@@ -0,0 +1,71 @@
# -*- coding: utf-8 -*-
#
# output_reader.py
#
# Date Created: Aug 31, 2016
#
# Author: Michael E. Tryby
# US EPA - ORD/NRMRL
#
'''
The module output_reader provides the class used to implement the output
generator.
'''
# project import
import epanet_output as oapi
def output_generator(path_ref):
'''
The output_generator is designed to iterate over an EPANET binary file and
yield element attributes. It is useful for comparing contents of binary
files for numerical regression testing.
The generator yields a Python tuple containing an array of element
attributes and a tuple containing the element type, period, and attribute.
Arguments:
path_ref - path to result file
Raises:
Exception()
...
'''
with OutputReader(path_ref) as br:
for period_index in range(0, br.report_periods()):
for element_type in oapi.ElementType:
for attribute in br.elementAttributes[element_type]:
yield (br.element_attribute(element_type, period_index, attribute),
(element_type, period_index, attribute))
class OutputReader():
'''
Provides a minimal API used to implement output_generator.
'''
def __init__(self, filename):
self.filepath = filename
self.handle = None
self.elementAttributes = {oapi.ElementType.NODE: oapi.NodeAttribute,
oapi.ElementType.LINK: oapi.LinkAttribute}
self.getElementAttribute = {oapi.ElementType.NODE: oapi.enr_get_node_attribute,
oapi.ElementType.LINK: oapi.enr_get_link_attribute}
def __enter__(self):
self.handle = oapi.enr_init()
oapi.enr_open(self.handle, self.filepath.encode())
return self
def __exit__(self, type, value, traceback):
self.handle = oapi.enr_close()
def report_periods(self):
return oapi.enr_get_times(self.handle, oapi.Time.NUM_PERIODS)
def element_attribute(self, element_type, time_index, attribute):
return self.getElementAttribute[element_type](self.handle, time_index, attribute)

report_diff.py (new file)

@@ -0,0 +1,102 @@
# -*- coding: utf-8 -*-
#
# report_diff.py
#
# Date Created: July 11, 2018
#
# Author: Michael E. Tryby
# US EPA - ORD/NRMRL
#
# system imports
import itertools as it
# third party imports
import numpy as np
# project imports
import nrtest_epanet.output_reader as ordr
def _binary_diff(path_test, path_ref, min_cdd):
for (test, ref) in it.izip(ordr.output_generator(path_test),
ordr.output_generator(path_ref)):
if len(test[0]) != len(ref[0]):
raise ValueError('Inconsistent lengths')
# Skip over arrays that are equal
if np.array_equal(test[0], ref[0]):
continue
else:
lre = _log_relative_error(test[0], ref[0])
idx = np.unravel_index(np.argmin(lre), lre.shape)
if lre[idx] < min_cdd:
_print_diff(idx, lre, test, ref)
return
def _log_relative_error(q, c):
'''
Computes log relative error, a measure of numerical accuracy.
Single precision machine epsilon is between 2^-24 and 2^-23.
Reference:
McCullough, B. D. "Assessing the Reliability of Statistical Software: Part I."
The American Statistician, vol. 52, no. 4, 1998, pp. 358-366.
'''
diff = np.subtract(q, c)
tmp_c = np.copy(c)
# If ref value is small compute absolute error
tmp_c[np.fabs(tmp_c) < 1.0e-6] = 1.0
re = np.fabs(diff)/np.fabs(tmp_c)
# If re is tiny set lre to number of digits
re[re < 1.0e-7] = 1.0e-7
# If re is very large set lre to zero
re[re > 2.0] = 1.0
lre = np.negative(np.log10(re))
# If lre is negative set to zero
lre[lre < 1.0] = 0.0
return lre
def _print_diff(idx, lre, test, ref):
idx_val = (idx[0], ref[1])
test_val = (test[0][idx[0]])
ref_val = (ref[0][idx[0]])
diff_val = (test_val - ref_val)
lre_val = (lre[idx[0]])
print("Idx: %s\nSut: %e Ref: %e Diff: %e LRE: %.2f\n"
% (idx_val, test_val, ref_val, diff_val, lre_val))
def report(args):
_binary_diff(args.test, args.ref, args.mincdd)
if __name__ == '__main__':
from argparse import ArgumentParser
parser = ArgumentParser(description='EPANET benchmark difference reporting')
parser.set_defaults(func=report)
parser.add_argument('-t', '--test', default=None,
help='Path to test benchmark')
parser.add_argument('-r', '--ref', default=None,
help='Path to reference benchmark')
parser.add_argument('-mc', '--mincdd', type=int, default=3,
help='Minimum correct decimal digits')
args = parser.parse_args()
args.func(args)

setup.py (new file, nrtest-epanet package)

@@ -0,0 +1,45 @@
# -*- coding: utf-8 -*-
#
# setup.py
#
# Created on Aug 30, 2016
# Author: Michael E. Tryby
# US EPA - ORD/NRMRL
#
''' Setup up script for nrtest_epanet package. '''
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
entry_points = {
'nrtest.compare': [
'epanet allclose = nrtest_epanet:epanet_allclose_compare',
'epanet mincdd = nrtest_epanet:epanet_mincdd_compare',
'epanet report = nrtest_epanet:epanet_report_compare',
# Add entry point for new comparison functions here
]
}
setup(
name='nrtest-epanet',
version='0.5.0',
description="EPANET extension for nrtest",
author="Michael E. Tryby",
author_email='tryby.michael@epa.gov',
url='https://github.com/USEPA',
packages=['nrtest_epanet',],
entry_points=entry_points,
install_requires=[
'header_detail_footer>=2.3',
'nrtest>=0.2.0',
'numpy>=1.7.0',
'epanet_output'
],
keywords='nrtest_epanet'
)

(diff omitted: a one-line file was deleted; another file's diff was suppressed because one or more lines are too long)

outputapi.c (deleted file)

@@ -1,502 +0,0 @@
//-----------------------------------------------------------------------------
//
// outputapi.c -- API for reading results from EPANet binary output file
//
// Version: 0.10
// Date: 08/05/14
// Date: 05/21/14
//
// Author: Michael E. Tryby
// US EPA - NRMRL
//
// Purpose: Output API provides an interface for retrieving results from
// an EPANet binary output file.
//
//-----------------------------------------------------------------------------
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
#include <string.h>
#include "outputapi.h"
#define INT4 int
#define REAL4 float
#define RECORDSIZE 4 // number of bytes per file record
#define MEMCHECK(x) (((x) == NULL) ? 411 : 0 )
#define MINNREC 14 // minimum allowable number of records
#define NNODERESULTS 4 // number of result fields for nodes
#define NLINKRESULTS 8 // number of result fields for links
struct ENResultsAPI {
char name[MAXFNAME + 1]; // file path/name
bool isOpened; // current state (CLOSED = 0, OPEN = 1)
FILE *file; // FILE structure pointer
INT4 nodeCount, tankCount, linkCount, pumpCount, valveCount;
INT4 reportStart, reportStep, simDuration, nPeriods;
INT4 flowFlag, pressFlag;
INT4 outputStartPos; // starting file position of output data
INT4 bytesPerPeriod; // bytes saved per simulation time period
};
//-----------------------------------------------------------------------------
// Local functions
//-----------------------------------------------------------------------------
float getNodeValue(ENResultsAPI*, int, int, ENR_NodeAttribute);
float getLinkValue(ENResultsAPI*, int, int, ENR_LinkAttribute);
ENResultsAPI* DLLEXPORT ENR_alloc(void)
{
return malloc(sizeof(struct ENResultsAPI));
}
int DLLEXPORT ENR_open(ENResultsAPI* enrapi, const char* path)
//
// Purpose: Open the output binary file and read epilogue
//
{
int magic1, magic2, errCode, version;
strncpy(enrapi->name, path, MAXFNAME);
enrapi->isOpened = false;
// Attempt to open binary output file for reading only
if ((enrapi->file = fopen(path, "rb")) == NULL)
return 434;
else
enrapi->isOpened = true;
// Fast forward to end and check for minimum number of records
fseek(enrapi->file, 0L, SEEK_END);
if (ftell(enrapi->file) < MINNREC*RECORDSIZE) {
fclose(enrapi->file);
// Error run terminated no results in binary file
return 435;
}
// Fast forward to end and read file epilogue
fseek(enrapi->file, -3*RECORDSIZE, SEEK_END);
fread(&(enrapi->nPeriods), RECORDSIZE, 1, enrapi->file);
fread(&errCode, RECORDSIZE, 1, enrapi->file);
fread(&magic2, RECORDSIZE, 1, enrapi->file);
// Rewind and read magic number from beginning of file
fseek(enrapi->file, 0L, SEEK_SET);
fread(&magic1, RECORDSIZE, 1, enrapi->file);
// Perform error checks
if (magic1 != magic2 || errCode != 0 || enrapi->nPeriods == 0) {
fclose(enrapi->file);
// Error run terminated no results in binary file
return 435;
}
// Otherwise read network size
fread(&version, RECORDSIZE, 1, enrapi->file);
fread(&(enrapi->nodeCount), RECORDSIZE, 1, enrapi->file);
fread(&(enrapi->tankCount), RECORDSIZE, 1, enrapi->file);
fread(&(enrapi->linkCount), RECORDSIZE, 1, enrapi->file);
fread(&(enrapi->pumpCount), RECORDSIZE, 1, enrapi->file);
// Jump ahead and read flow and pressure units
fseek(enrapi->file, 3*RECORDSIZE, SEEK_CUR);
fread(&(enrapi->flowFlag), RECORDSIZE, 1, enrapi->file);
fread(&(enrapi->pressFlag), RECORDSIZE, 1, enrapi->file);
// Jump ahead and read time information
fseek(enrapi->file, RECORDSIZE, SEEK_CUR);
fread(&(enrapi->reportStart), RECORDSIZE, 1, enrapi->file);
fread(&(enrapi->reportStep), RECORDSIZE, 1, enrapi->file);
fread(&(enrapi->simDuration), RECORDSIZE, 1, enrapi->file);
// Compute positions and offsets for retrieving data
enrapi->outputStartPos = 884;
enrapi->outputStartPos += 32*enrapi->nodeCount + 32*enrapi->linkCount;
enrapi->outputStartPos += 12*enrapi->linkCount+ 8*enrapi->tankCount
+ 4*enrapi->nodeCount + 8*enrapi->linkCount;
enrapi->outputStartPos += 28*enrapi->pumpCount + 4;
enrapi->bytesPerPeriod = 16*enrapi->nodeCount + 32*enrapi->linkCount;
return 0;
}
int DLLEXPORT ENR_getNetSize(ENResultsAPI* enrapi, ENR_ElementCount code, int* count)
//
// Purpose: Returns network size
//
{
*count = -1;
if (enrapi->isOpened) {
switch (code)
{
case ENR_nodeCount: *count = enrapi->nodeCount; break;
case ENR_tankCount: *count = enrapi->tankCount; break;
case ENR_linkCount: *count = enrapi->linkCount; break;
case ENR_pumpCount: *count = enrapi->pumpCount; break;
case ENR_valveCount: *count = enrapi->valveCount; break;
default: return 421;
}
return 0;
}
return 412;
}
int DLLEXPORT ENR_getUnits(ENResultsAPI* enrapi, ENR_Unit code, int* unitFlag)
//
// Purpose: Returns pressure and flow units
//
{
*unitFlag = -1;
if (enrapi->isOpened) {
switch (code)
{
case ENR_flowUnits: *unitFlag = enrapi->flowFlag; break;
case ENR_pressUnits: *unitFlag = enrapi->pressFlag; break;
default: return 421;
}
return 0;
}
return 412;
}
int DLLEXPORT ENR_getTimes(ENResultsAPI* enrapi, ENR_Time code, int* time)
//
// Purpose: Returns report and simulation time related parameters.
//
{
*time = -1;
if (enrapi->isOpened) {
switch (code)
{
case ENR_reportStart: *time = enrapi->reportStart; break;
case ENR_reportStep: *time = enrapi->reportStep; break;
case ENR_simDuration: *time = enrapi->simDuration; break;
case ENR_numPeriods: *time = enrapi->nPeriods; break;
default: return 421;
}
return 0;
}
return 412;
}
float* ENR_newOutValueSeries(ENResultsAPI* enrapi, int seriesStart,
int seriesLength, int* length, int* errcode)
//
// Purpose: Allocates memory for outValue Series.
//
// Warning: Caller must free memory allocated by this function using ENR_free().
//
{
int size;
float* array;
if (enrapi->isOpened) {
size = seriesLength - seriesStart;
if (size > enrapi->nPeriods)
size = enrapi->nPeriods;
// Allocate memory for outValues
array = (float*) calloc(size + 1, sizeof(float));
*errcode = (MEMCHECK(array));
*length = size;
return array;
}
*errcode = 412;
return NULL;
}
float* ENR_newOutValueArray(ENResultsAPI* enrapi, ENR_ApiFunction func,
ENR_ElementType type, int* length, int* errcode)
//
// Purpose: Allocates memory for outValue Array.
//
// Warning: Caller must free memory allocated by this function using ENR_free().
//
{
int size;
float* array;
if (enrapi->isOpened) {
switch (func)
{
case ENR_getAttribute:
if (type == ENR_node)
size = enrapi->nodeCount;
else
size = enrapi->linkCount;
break;
case ENR_getResult:
if (type == ENR_node)
size = NNODERESULTS;
else
size = NLINKRESULTS;
break;
default: *errcode = 421;
return NULL;
}
// Allocate memory for outValues
array = (float*) calloc(size, sizeof(float));
*errcode = (MEMCHECK(array));
*length = size;
return array;
}
*errcode = 412;
return NULL;
}
int DLLEXPORT ENR_getNodeSeries(ENResultsAPI* enrapi, int nodeIndex, ENR_NodeAttribute attr,
int seriesStart, int seriesLength, float* outValueSeries, int* length)
//
// What if timeIndex 0? length 0?
//
// Purpose: Get time series results for particular attribute. Specify series
// start and length using seriesStart and seriesLength respectively.
//
{
int k;
if (enrapi->isOpened) {
// Check memory for outValues
if (outValueSeries == NULL) return 411;
// loop over and build time series
for (k = 0; k <= seriesLength; k++)
outValueSeries[k] = getNodeValue(enrapi, seriesStart + 1 + k,
nodeIndex, attr);
return 0;
}
// Error no results to report on binary file not opened
return 412;
}
int DLLEXPORT ENR_getLinkSeries(ENResultsAPI* enrapi, int linkIndex, ENR_LinkAttribute attr,
int seriesStart, int seriesLength, float* outValueSeries)
//
// What if timeIndex 0? length 0?
//
// Purpose: Get time series results for particular attribute. Specify series
// start and length using seriesStart and seriesLength respectively.
//
{
int k;
if (enrapi->isOpened) {
// Check memory for outValues
if (outValueSeries == NULL) return 411;
// loop over and build time series
for (k = 0; k <= seriesLength; k++)
outValueSeries[k] = getLinkValue(enrapi, seriesStart +1 + k,
linkIndex, attr);
return 0;
}
// Error no results to report on binary file not opened
return 412;
}
int DLLEXPORT ENR_getNodeAttribute(ENResultsAPI* enrapi, int timeIndex,
ENR_NodeAttribute attr, float* outValueArray)
//
// Purpose: For all nodes at given time, get a particular attribute
//
{
INT4 offset;
if (enrapi->isOpened) {
// Check memory for outValues
if (outValueArray == NULL) return 411;
// calculate byte offset to start time for series
offset = enrapi->outputStartPos + (timeIndex)*enrapi->bytesPerPeriod;
// add offset for node and attribute
offset += (attr*enrapi->nodeCount)*RECORDSIZE;
fseek(enrapi->file, offset, SEEK_SET);
fread(outValueArray, RECORDSIZE, enrapi->nodeCount, enrapi->file);
return 0;
}
// Error no results to report on binary file not opened
return 412;
}
int DLLEXPORT ENR_getLinkAttribute(ENResultsAPI* enrapi, int timeIndex,
ENR_LinkAttribute attr, float* outValueArray)
//
// Purpose: For all links at given time, get a particular attribute
//
{
INT4 offset;
if (enrapi->isOpened) {
// Check memory for outValues
if (outValueArray == NULL) return 411;
// calculate byte offset to start time for series
offset = enrapi->outputStartPos + (timeIndex)*enrapi->bytesPerPeriod
+ (NNODERESULTS*enrapi->nodeCount)*RECORDSIZE;
// add offset for link and attribute
offset += (attr*enrapi->linkCount)*RECORDSIZE;
fseek(enrapi->file, offset, SEEK_SET);
fread(outValueArray, RECORDSIZE, enrapi->linkCount, enrapi->file);
return 0;
}
// Error no results to report on binary file not opened
return 412;
}
int DLLEXPORT ENR_getNodeResult(ENResultsAPI* enrapi, int timeIndex, int nodeIndex,
float* outValueArray)
//
// Purpose: For a node at given time, get all attributes
//
{
int j;
if (enrapi->isOpened) {
// Check memory for outValues
if (outValueArray == NULL) return 411;
for (j = 0; j < NNODERESULTS; j++)
outValueArray[j] = getNodeValue(enrapi, timeIndex + 1, nodeIndex, j);
return 0;
}
// Error no results to report on binary file not opened
return 412;
}
int DLLEXPORT ENR_getLinkResult(ENResultsAPI* enrapi, int timeIndex, int linkIndex,
float* outValueArray)
//
// Purpose: For a link at given time, get all attributes
//
{
int j;
if (enrapi->isOpened) {
// Check memory for outValues
if (outValueArray == NULL) return 411;
for (j = 0; j < NLINKRESULTS; j++)
outValueArray[j] = getLinkValue(enrapi, timeIndex + 1, linkIndex, j);
return 0;
}
// Error no results to report on binary file not opened
return 412;
}
int DLLEXPORT ENR_free(float* array)
//
// Purpose: frees memory allocated using ENR_newOutValueSeries() or
// ENR_newOutValueArray()
//
{
if (array != NULL)
free(array);
return 0;
}
int DLLEXPORT ENR_close(ENResultsAPI* enrapi)
//
// Purpose: Clean up after and close Output API
//
{
if (enrapi->isOpened) {
fclose(enrapi->file);
free(enrapi);
}
// Error binary file not opened
else return 412;
return 0;
}
int DLLEXPORT ENR_errMessage(int errcode, char* errmsg, int n)
//
// Purpose: takes error code returns error message
//
// Input Error 411: no memory allocated for results
// Input Error 412: no results binary file hasn't been opened
// Input Error 421: invalid parameter code
// File Error 434: unable to open binary output file
// File Error 435: run terminated no results in binary file
{
switch (errcode)
{
case 411: strncpy(errmsg, ERR411, n); break;
case 412: strncpy(errmsg, ERR412, n); break;
case 421: strncpy(errmsg, ERR421, n); break;
case 434: strncpy(errmsg, ERR434, n); break;
case 435: strncpy(errmsg, ERR435, n); break;
default: return 421;
}
return 0;
}
float getNodeValue(ENResultsAPI* enrapi, int timeIndex, int nodeIndex,
ENR_NodeAttribute attr)
//
// Purpose: Retrieves an attribute value at a specified node and time
//
{
REAL4 y;
INT4 offset;
// calculate byte offset to start time for series
offset = enrapi->outputStartPos + (timeIndex - 1)*enrapi->bytesPerPeriod;
// add bytepos for node and attribute
offset += (nodeIndex + attr*enrapi->nodeCount)*RECORDSIZE;
fseek(enrapi->file, offset, SEEK_SET);
fread(&y, RECORDSIZE, 1, enrapi->file);
return y;
}
float getLinkValue(ENResultsAPI* enrapi, int timeIndex, int linkIndex,
ENR_LinkAttribute attr)
//
// Purpose: Retrieves an attribute value at a specified link and time
//
{
REAL4 y;
INT4 offset;
// Calculate byte offset to start time for series
offset = enrapi->outputStartPos + (timeIndex - 1)*enrapi->bytesPerPeriod
+ (NNODERESULTS*enrapi->nodeCount)*RECORDSIZE;
// add bytepos for link and attribute
offset += (linkIndex + attr*enrapi->linkCount)*RECORDSIZE;
fseek(enrapi->file, offset, SEEK_SET);
fread(&y, RECORDSIZE, 1, enrapi->file);
return y;
}

outputapi.h (deleted file)

@@ -1,122 +0,0 @@
/*
* outputapi.h
*
* Created on: Jun 4, 2014
* Author: mtryby
*/
#ifndef OUTPUTAPI_H_
#define OUTPUTAPI_H_
#define MAXFNAME 259
/*------------------- Error Messages --------------------*/
#define ERR411 "Input Error 411: no memory allocated for results."
#define ERR412 "Input Error 412: no results; binary file hasn't been opened."
#define ERR421 "Input Error 421: invalid parameter code."
#define ERR434 "File Error 434: unable to open binary output file."
#define ERR435 "File Error 435: run terminated; no results in binary file."
/* Epanet Results binary file API */
typedef struct ENResultsAPI ENResultsAPI; // opaque struct object
typedef enum {
ENR_node = 1,
ENR_link = 2
} ENR_ElementType;
typedef enum {
ENR_getSeries = 1,
ENR_getAttribute = 2,
ENR_getResult = 3
} ENR_ApiFunction;
typedef enum {
ENR_nodeCount = 1,
ENR_tankCount = 2,
ENR_linkCount = 3,
ENR_pumpCount = 4,
ENR_valveCount = 5
} ENR_ElementCount;
typedef enum {
ENR_flowUnits = 1,
ENR_pressUnits = 2
} ENR_Unit;
typedef enum {
ENR_reportStart = 1,
ENR_reportStep = 2,
ENR_simDuration = 3,
ENR_numPeriods = 4
}ENR_Time;
typedef enum {
ENR_demand = 0,
ENR_head = 1,
ENR_pressure = 2,
ENR_quality = 3
} ENR_NodeAttribute;
typedef enum {
ENR_flow = 0,
ENR_velocity = 1,
ENR_headloss = 2,
ENR_avgQuality = 3,
ENR_status = 4,
ENR_setting = 5,
ENR_rxRate = 6,
ENT_frctnFctr = 7
} ENR_LinkAttribute;
#ifdef WINDOWS
#ifdef __cplusplus
#define DLLEXPORT extern "C" __declspec(dllexport) __stdcall
#else
#define DLLEXPORT __declspec(dllexport) __stdcall
#endif
#else
#ifdef __cplusplus
#define DLLEXPORT extern "C"
#else
#define DLLEXPORT
#endif
#endif
ENResultsAPI* DLLEXPORT ENR_alloc(void);
int DLLEXPORT ENR_open(ENResultsAPI* enrapi, const char* path);
int DLLEXPORT ENR_getNetSize(ENResultsAPI* enrapi, ENR_ElementCount code, int* count);
int DLLEXPORT ENR_getUnits(ENResultsAPI* enrapi, ENR_Unit code, int* unitFlag);
float* ENR_newOutValueSeries(ENResultsAPI* enrapi, int seriesStart,
int seriesLength, int* length, int* errcode);
float* ENR_newOutValueArray(ENResultsAPI* enrapi, ENR_ApiFunction func,
ENR_ElementType type, int* length, int* errcode);
int DLLEXPORT ENR_getNodeSeries(ENResultsAPI* enrapi, int nodeIndex, ENR_NodeAttribute attr,
int timeIndex, int length, float* outValueSeries, int* len);
int DLLEXPORT ENR_getLinkSeries(ENResultsAPI* enrapi, int linkIndex, ENR_LinkAttribute attr,
int timeIndex, int length, float* outValueSeries);
int DLLEXPORT ENR_getNodeAttribute(ENResultsAPI* enrapi, int timeIndex,
ENR_NodeAttribute attr, float* outValueArray);
int DLLEXPORT ENT_getLinkAttribute(ENResultsAPI* enrapi, int timeIndex,
ENR_LinkAttribute attr, float* outValueArray);
int DLLEXPORT ENR_getNodeResult(ENResultsAPI* enrapi, int timeIndex, int nodeIndex,
float* outValueArray);
int DLLEXPORT ENR_getLinkResult(ENResultsAPI* enrapi, int timeIndex, int linkIndex,
float* outValueArray);
int DLLEXPORT ENR_free(float *array);
int DLLEXPORT ENR_close(ENResultsAPI* enrapi);
int DLLEXPORT ENR_errMessage(int errcode, char* errmsg, int n);
#endif /* OUTPUTAPI_H_ */

tools/requirements-appveyor.txt (new file)

@@ -0,0 +1,19 @@
#
# requirements-appveyor.txt
#
# Date Created: 10/10/2017
# Author: Michael E. Tryby
# US EPA ORD/NRMRL
#
# Useful for configuring a python environment to run epanet-nrtestsuite
# on AppVeyor CI.
#
# command:
# $ pip install --src build/packages -r tools/requirements-appveyor.txt
#
nrtest>=0.2.3
-f https://github.com/OpenWaterAnalytics/epanet-python/releases/download/v0.1.0-alpha/epanet_output-0.1.0a0-cp27-cp27m-win32.whl
-e ./tools/nrtest-epanet

tools/requirements.txt (new file)

@@ -0,0 +1,16 @@
#
# requirements.txt
#
# Date Created: 10/10/2017
# Author: Michael E. Tryby
# US EPA ORD/NRMRL
#
# Useful for configuring a python environment to run epanet-nrtestsuite.
#
# command:
# $ pip install --src build/packages -r tools/requirements.txt
#
nrtest>=0.2.3
#-e ./tools/epanet-output
#-e ./tools/nrtest-epanet

tools/run-nrtest.cmd (new file)

@@ -0,0 +1,64 @@
::
:: run_nrtest.cmd - Runs numerical regression test
::
:: Date Created: 1/8/2018
::
:: Author: Michael E. Tryby
:: US EPA - ORD/NRMRL
::
:: Arguments:
:: 1 - (REF build identifier)
:: 2 - (SUT build identifier)
:: 3 - (test suite path)
::
@echo off
setlocal
:: Check existence and apply default arguments
IF [%1]==[] ( echo "ERROR: REF_BUILD_ID must be defined" & exit /B 1
) ELSE ( set "REF_BUILD_ID=%~1" )
IF [%2]==[] ( set "SUT_BUILD_ID=local"
) ELSE ( set "SUT_BUILD_ID=%~2" )
IF [%3]==[] ( set "TEST_SUITE_PATH=nrtestsuite"
) ELSE ( set "TEST_SUITE_PATH=%~3" )
:: determine location of python Scripts folder
FOR /F "tokens=*" %%G IN ('where python') DO (
set PYTHON_DIR=%%~dpG
)
set "NRTEST_SCRIPT_PATH=%PYTHON_DIR%Scripts"
set NRTEST_EXECUTE_CMD=python %NRTEST_SCRIPT_PATH%\nrtest execute
set TEST_APP_PATH=apps\epanet-%SUT_BUILD_ID%.json
set TESTS=tests\examples tests\exeter tests\large tests\network_one tests\press_depend tests\small tests\tanks tests\valves
set TEST_OUTPUT_PATH=benchmark\epanet-%SUT_BUILD_ID%
set NRTEST_COMPARE_CMD=python %NRTEST_SCRIPT_PATH%\nrtest compare
set REF_OUTPUT_PATH=benchmark\epanet-%REF_BUILD_ID%
set RTOL_VALUE=0.01
set ATOL_VALUE=0.0
:: change current directory to test suite
cd %TEST_SUITE_PATH%
:: if present clean test benchmark results
if exist %TEST_OUTPUT_PATH% (
rmdir /s /q %TEST_OUTPUT_PATH%
)
echo INFO: Creating SUT %SUT_BUILD_ID% artifacts
set NRTEST_COMMAND=%NRTEST_EXECUTE_CMD% %TEST_APP_PATH% %TESTS% -o %TEST_OUTPUT_PATH%
:: if there is an error exit the script with error value 1
%NRTEST_COMMAND% || exit /B 1
echo.
echo INFO: Comparing SUT artifacts to REF %REF_BUILD_ID%
set NRTEST_COMMAND=%NRTEST_COMPARE_CMD% %TEST_OUTPUT_PATH% %REF_OUTPUT_PATH% --rtol %RTOL_VALUE% --atol %ATOL_VALUE% -o benchmark\receipt.json
%NRTEST_COMMAND%

tools/run-nrtest.sh (new executable file)

@@ -0,0 +1,98 @@
#! /bin/bash
#
# run-nrtest.sh - Runs numerical regression test
#
# Date Created: 10/16/2017
#
# Author: Michael E. Tryby
# US EPA - ORD/NRMRL
#
# Arguments:
# 1 - REF build identifier
# 2 - SUT build identifier
# 3 - relative path to location where the test suite is staged
#
run-nrtest()
{
return_value=0
test_suite_path=$4
nrtest_execute_cmd="nrtest execute"
sut_app_path="apps/epanet-$3.json"
tests="tests/examples tests/exeter tests/large tests/network_one tests/small tests/tanks tests/valves"
sut_output_path="benchmark/epanet-$3"
nrtest_compare_cmd="nrtest compare"
ref_output_path="benchmark/epanet-$2"
rtol_value=0.1
atol_value=0.0
# change current directory to test_suite
cd ${test_suite_path}
# clean test benchmark results
rm -rf ${test_output_path}
echo INFO: Creating test benchmark
nrtest_command="${nrtest_execute_cmd} ${sut_app_path} ${tests} -o ${sut_output_path}"
echo INFO: "$nrtest_command"
return_value=$( $nrtest_command )
if [ $1 = 'true' ]; then
echo
echo INFO: Comparing test and ref benchmarks
nrtest_command="${nrtest_compare_cmd} ${sut_output_path} ${ref_output_path} --rtol ${rtol_value} --atol ${atol_value} --output benchmark\receipt.json"
echo INFO: "$nrtest_command"
return_value=$( $nrtest_command )
fi
return $return_value
}
print_usage() {
echo " "
echo "run-nrtest.sh - generates artifacts for SUT and performes benchmark comparison "
echo " "
echo "options:"
echo "-c don't compare SUT and REF artifacts"
echo "-r ref_build id REF build identifier"
echo "-s sut build id SUT build identifier"
echo "-t test_path relative path to location where test suite is staged"
echo " "
}
# Default option values
compare='true'
ref_build_id='unknown'
sut_build_id='local'
test_path='nrtestsuite'
while getopts "cr:s:t:" flag; do
case "${flag}" in
c ) compare='false' ;;
r ) ref_build_id=${OPTARG} ;;
s ) sut_build_id=${OPTARG} ;;
t ) test_path="${OPTARG}" ;;
\? ) print_usage
exit 1 ;;
esac
done
shift $(($OPTIND - 1))
# determine ref_build_id from manifest file
if [[ $ref_build_id == 'unknown' ]] && [[ $compare == 'true' ]]; then
description=(`cat ${test_path}/manifest.json | jq '.Application.description | splits(" ")'`)
ref_build_id=${description[1]//\"/}
fi
# Invoke command
run_command="run-nrtest ${compare} ${ref_build_id} ${sut_build_id} ${test_path}"
echo INFO: "$run_command"
$run_command

tools/test-config.sh (new file)

@@ -0,0 +1,43 @@
#! /bin/bash
#
# test-config.sh - Generates nrtest test configuration file for test case.
#
# Date Created: 3/19/2018
#
# Author: Michael E. Tryby
# US EPA - ORD/NRMRL
#
# Arguments:
# 1 - name
# 2 - version
# 3 - description
#
# Suggested Usage:
# $ for file in .//*; do ./test-config.sh $file 1.0 > "${file%.*}.json"; done
#
filename="$1"
name="${filename%.*}"
version="$2"
description="$3"
cat<<EOF
{
"name": "${name}",
"version": "${version}",
"description": "${description}",
"args": [
"${name}.inp",
"${name}.rpt",
"${name}.out"
],
"input_files": [
"${name}.inp"
],
"output_files": {
"${name}.rpt": "epanet report",
"${name}.out": "epanet allclose"
}
}
EOF

tools/update-authors.sh (modified)

@@ -1,6 +1,6 @@
#!/bin/sh
##
##
## This script will auto-generate the AUTHORS attribution file.
## If your name does not display correctly, then please
## update the .mailmap file in the root repo directory
@@ -25,7 +25,12 @@ END {
print "# Authors ordered by first contribution.\n";
print "# Generated by tools/update-authors.sh\n";
print "\n", @authors;
print "\n*** some works are in the public domain, all others licensed under terms: see LICENSE";
}
' > ../AUTHORS
'
echo "\n\nSome commits are co-authored:\n"
git log | grep Co-Author