mirror of https://github.com/sensebox/opensensmapr synced 2026-04-09 19:15:26 +02:00

Compare commits


150 commits

Author SHA1 Message Date
jan
8ef52f8f59 update inst/doc 2023-03-10 10:32:36 +01:00
jan
8d0746b263 changed T to TRUE 2023-03-06 17:07:18 +01:00
jan
fc2ee05f77 updated NEWS 2023-03-06 16:56:11 +01:00
jan
e3099ca35c updated NEWS 2023-03-06 16:55:57 +01:00
jan
19351bd487 dont install packages within functions or vignette 2023-03-06 16:44:19 +01:00
jan
1732084856 ensure archive resource is available 2023-03-06 16:42:51 +01:00
jan
334a49a309 reset par on exit 2023-03-06 11:17:33 +01:00
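The fix in this commit follows the standard R idiom for CRAN-safe plotting functions: snapshot `par()` and restore it via `on.exit()`, so the user's graphics state survives even if the function errors. A minimal sketch (the function name is hypothetical, not the package's actual code):

```r
# Restore graphical parameters on exit, as CRAN policy requires.
plot_with_margins <- function(x, ...) {   # hypothetical example function
  oldpar <- par(no.readonly = TRUE)       # snapshot all settable parameters
  on.exit(par(oldpar))                    # restore them even on error
  par(mar = c(4, 4, 1, 1))                # temporary margin change
  plot(x, ...)
}
```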
jan
95f4f889da adding return 2023-03-06 11:04:40 +01:00
jan
ddf911e6a7 updated license 2023-03-06 10:19:58 +01:00
jan
6ebfc7f50a changed T and F to TRUE and FALSE 2023-03-06 10:19:37 +01:00
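The motivation for this change: `TRUE` and `FALSE` are reserved words in R, while `T` and `F` are ordinary variables that any code can shadow, which is why CRAN checks flag them. A small illustration:

```r
# T and F are just bindings to TRUE/FALSE and can be reassigned:
T <- 0          # perfectly legal; silently breaks code that relied on T
isTRUE(T)       # FALSE now
# TRUE <- 0     # would fail: TRUE is a reserved word and cannot be shadowed
isTRUE(TRUE)    # always TRUE
```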
jan
0e1d9e3cad not planned to be committed before.. 2023-03-01 16:38:01 +01:00
jan
9911226a76 dontrun test with sf package 2023-03-01 16:36:51 +01:00
jan
97475dbbff using precomputed data for vignettes 2023-03-01 16:35:27 +01:00
jan
eec6f84806 use cached data to speed up building of vignettes 2023-02-28 16:18:36 +01:00
jan
86e80d52c9 automatically generated documentation has changed 2023-02-27 16:55:22 +01:00
jan
86efd52bf2 optimized runtime for tests 2023-02-27 16:54:16 +01:00
jan
60445d70c5 codecov disabled 2023-02-27 14:31:06 +01:00
jan
b1001b174e improved runtime to be affordable on CRAN submission 2023-02-24 12:11:32 +01:00
jan
e2e9e3dbb3 automatically generated manual updates 2023-02-24 12:10:20 +01:00
jan
7e8eb46c8e removed URIs invalid for CRAN 2023-02-23 17:21:56 +01:00
jan
262141751f ignore files not allowed at top level 2023-02-23 16:14:31 +01:00
jan
a22c46ba14 update links and ignores 2023-02-23 15:18:10 +01:00
jan
37d4dde1d6 add inst/doc 2023-02-23 15:12:46 +01:00
jan
62667ef139 remove inst/doc files 2023-02-23 14:07:21 +01:00
jan
b26ca150a9 NEWS does not need to be ignored 2023-02-23 14:06:08 +01:00
jan
d919f89082 spellcheck 2023-02-23 11:28:27 +01:00
jan
64db38c291 change CHANGES to NEWS 2023-02-22 12:25:19 +01:00
jan
e4216b3572 ignore cran-comments 2023-02-22 12:24:47 +01:00
jan
a4b878fc8f change donttest to dontrun to avoid unwanted files 2023-02-22 12:24:28 +01:00
jan
7e1f42b8b9 dont run cache test 2023-02-20 14:46:36 +01:00
jan
ebb9f5bd1f add v0.6.0 news and rename from CHANGES 2023-02-20 12:50:31 +01:00
jan
35c9e84302 updated links 2023-02-09 14:26:53 +01:00
jan
c8925df68d updated manual files 2023-02-09 14:26:35 +01:00
jan
4e1b5d6389 update hyperlinks 2023-02-08 12:00:45 +01:00
jan
8393048957 version update 2023-02-08 12:00:08 +01:00
jan
34a5dfae19 load opensensemapr only from package 2023-02-07 15:39:54 +01:00
jan
438eda09cd change format for empty responses of archive measurements 2023-02-07 15:38:21 +01:00
jan
36f4701557 add required library to donttest 2023-02-07 15:37:42 +01:00
jan
0292779ca5 as.tibble deprectated -> change to as_tibble 2023-02-02 17:08:40 +01:00
jan
4a81cab11c create dir for clear_cache test but dont run 2023-02-02 11:09:30 +01:00
jan
4b24aa9582 updated manual 2023-02-01 12:47:36 +01:00
jan
35c3014dee fixing && error 2023-02-01 12:47:05 +01:00
jan
35d9ee697b history Rmd revised 2023-02-01 09:37:38 +01:00
jan
ca7c32ee64 change donttest to dontrun to avoid test errors 2023-02-01 09:31:20 +01:00
jan
6d5c821d3a osem history revised (2022) 2023-01-31 18:04:25 +01:00
jan
b4ee1b9ff6 update authors 2023-01-31 17:21:01 +01:00
jan
fda76ed670 add some brackets 2023-01-31 17:14:22 +01:00
jan
0529dd9a29 fix sensorType for empty ones 2023-01-31 17:13:41 +01:00
jan
379b38046d add grouptag 2023-01-31 17:12:59 +01:00
jan
24851046f2 add bbox 2023-01-31 17:12:03 +01:00
jan
e4825ca14c test bbox 2023-01-31 17:08:47 +01:00
jan
92672ae74c new tibble syntax 2023-01-31 17:07:30 +01:00
Norwin
f69cf62b27
fail gracefully when api is not available (#27)
more robust error handling when API not available (#26)
2019-05-16 12:35:52 +02:00
b69e5dc57f v0.5.1 2019-02-09 22:49:19 +01:00
4b01bbbee1 test box field parsing 2019-02-09 22:46:17 +01:00
9ddc077bfd cleanup date parsing 2019-02-09 22:10:25 +01:00
c618907853 compatibility with API version v6 2019-02-09 22:03:33 +01:00
3e56cd1a0e Update Code of Conduct 💣
This refinement of the CoC is based on my experiences from other
projects. A series of edits might follow.
As it has a low contributor count, I use this project as a playground to
develop a good CoC which does not focus on identity politics in order to
be a useful tool even with a hate mob pushing in.

My issues with the contributor covenant:

- I don't like its authoritarian tone, I want something more welcoming,
  improvement oriented. That'll be addressed next.

- Listing minorities explicitly is en vogue, but does not help to define
  acceptable behavior. Looking at prominent projects, this property has
  actually been gamed by harassers to say "but check my privilege, i am
  $x myself!", which in turn ended up with a non-privilege competition,
  generating more toxic waste.
  Why don't we just say *there is no acceptable reason for harassment*?

- The list of examples again tries to be a deny-list, where it is
  actually just a list of... examples. Let's simplify that list, as it
  mostly contains duplicates anyway.
  Every explicitly mentioned action provides additional attack surface
  for straw-man discussions.
2018-10-23 15:25:34 +02:00
8936ff270c make R CMD check happy 🔪 2018-10-20 19:04:56 +02:00
e853430c8e fix appveyor badge 2018-10-20 11:47:10 +02:00
e37f572a94 v0.5.0 2018-10-20 11:33:27 +02:00
32d0cceb28 fix lint 2018-10-20 11:33:27 +02:00
92cbbcbfc7 fix tests 2018-10-20 11:33:27 +02:00
ee491673fa fix issue with missing lastMeasurement 2018-10-20 11:33:27 +02:00
ddc6289ce3 improve documentation, rebuild docs 2018-10-20 01:57:18 +02:00
de3a05bf97 Merge branch 'measurements_archive' into development 2018-10-20 01:50:14 +02:00
8d515a5fb0 build performance improvements 2018-10-20 01:49:55 +02:00
18a698b375 add some tests for osem_measurements_archive 2018-10-20 01:48:33 +02:00
4d33fa9029 document osem_measurements_archive 2018-10-20 01:25:45 +02:00
80dc58a298 move methods for external generics into one place 2018-10-19 23:41:56 +02:00
c89cd274a5 fix delayed method loading
regression from 6a42357ec3
2018-10-19 23:35:19 +02:00
abcfbf5910 fix archive 404 handling 2018-10-18 17:07:20 +02:00
33a9c42e54 add osem_measurement_archive()
TODO: tests, documentation
2018-10-18 16:31:30 +02:00
c4da876761 v0.4.3 2018-09-21 12:21:22 +02:00
6a42357ec3 dynamically load S3 methods from foreign packages
namely dplyr::mutate, dplyr::filter, sf::st_as_sf

this is due to changes in the upcoming R release 3.6.0.
the approach taken is copied from the sf package.
2018-09-21 12:06:55 +02:00
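The registration pattern this commit describes (borrowed from sf) is typically done in `.onLoad()`, so the package does not need a hard dependency on dplyr or sf just to provide methods for their generics. A sketch under those assumptions, not the package's actual code:

```r
.onLoad <- function(libname, pkgname) {
  # Register S3 methods for foreign generics only when the providing
  # package is available, avoiding hard Depends/Imports on dplyr and sf.
  if (requireNamespace("dplyr", quietly = TRUE)) {
    registerS3method("mutate", "sensebox", mutate.sensebox,
                     envir = asNamespace("dplyr"))
    registerS3method("filter", "sensebox", filter.sensebox,
                     envir = asNamespace("dplyr"))
  }
  if (requireNamespace("sf", quietly = TRUE)) {
    registerS3method("st_as_sf", "sensebox", st_as_sf.sensebox,
                     envir = asNamespace("sf"))
  }
}
```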
aa453d6afe add readr as hard dependency
was only in 'Suggests' before, but is actually needed for essential
functionality (osem_measurements())
2018-09-21 11:58:09 +02:00
1976e07cec v0.4.2 2018-09-05 10:28:17 +02:00
bdc72e94e1 change references to sensebox org 2018-09-05 10:27:55 +02:00
f30bc9c185 pass on ... in plot.sensebox()
fixes #24
2018-09-04 11:32:54 +02:00
noerw
925909ebe8 finally workaround / fix #22 2018-06-27 18:31:31 +02:00
noerw
93b4f6fe52 clarify README 2018-06-26 23:57:39 +02:00
noerw
f53eeb015c attempt to build vignettes on travis 2018-06-11 19:32:45 +02:00
noerw
12ffb14b45 attempt to build vignettes on travis 2018-06-07 07:52:15 +02:00
noerw
28e767586e attempt to build vignettes on travis 2018-06-07 00:20:20 +02:00
noerw
4dac0a4c04 v0.4.1 2018-06-07 00:20:20 +02:00
noerw
f7cbb1bc26 update vignette to workaround #22 ...again 2018-06-07 00:20:20 +02:00
noerw
994f08ab94 fix osem_as_measurements 2018-06-07 00:20:20 +02:00
noerw
97768e7cdb clean up osem-serialization, #22 2018-06-05 20:22:22 +02:00
noerw
54b0994671
v0.4.0 2018-05-26 12:52:30 +02:00
38008b1e6c
add/update vignette builds 2018-05-26 12:52:30 +02:00
noerw
60ae1d5358
make R CMD check happy 2018-05-26 12:52:30 +02:00
noerw
553772d209 update docs + vignette builds 2018-05-25 17:17:47 +02:00
noerw
e49ae4bb50 update osem-history, add vignette deps to DESCRIPTION 2018-05-25 01:37:06 +02:00
noerw
1966c305bc add more examples, fix missing parameter check 2018-05-25 01:35:57 +02:00
noerw
dd6d8c8539 add caching feature 2018-05-25 01:34:47 +02:00
noerw
4f95ae19a8 hello CRAN :^|
i'm all in for quality control, but some of these checks are just ridiculous..
rejecting a package b/c of typos auto-checked against an aspell dict - in a text
read by ~1% of users?? and the only workaround (whitelisting words) is not
documented, and TOTALLY overengineered?!?
going for the tech elitism awards, huh
/rant

but hey, we're on board of that train now! :}
2018-05-14 00:56:10 +02:00
noerw
a7462ba1e1 add vignette osem-history 2018-05-14 00:56:10 +02:00
noerw
b79f3dff8b add vignette osem-serialization
#13
2018-05-10 22:28:37 +02:00
noerw
8975cbc664 fix print.sensebox, add more tests 2018-05-10 22:28:37 +02:00
noerw
7ea3ddca01 speed up travis build by skipping vignettes 2018-05-07 18:13:41 +02:00
noerw
8999284d62 last touch before CRAN submission 2018-05-07 18:13:16 +02:00
noerw
22bf5e26a7 increase test coverage, fixes #8 2018-05-07 01:46:34 +02:00
noerw
e19b040eb2 fix build warnings 2018-05-07 01:32:31 +02:00
noerw
364f612216 add lintr config, make code lint compliant, fixes #20 2018-05-07 01:32:11 +02:00
noerw
6a95171bd3 fix [.osem_measurements
regression from 6d2f6f6
2018-01-18 17:38:35 +01:00
faa58d4bb8
fixup! make examples not spam test logs 2018-01-15 17:42:37 +01:00
Norwin
12f37006bd
Merge pull request #17 from nuest/master
Add tests and CI integrations, add CoC
2018-01-15 14:56:29 +00:00
7e7b9bcb12
skip API dependant tests on CRAN 2018-01-15 15:27:42 +01:00
6d2f6f67e1
add some tests, fix failing tests 2018-01-15 15:27:42 +01:00
909e4de36d
lint 2018-01-15 15:27:42 +01:00
e7203cea81
make examples not spam test logs 2018-01-15 15:27:42 +01:00
nuest
d82b538b7a increase test coverage 2018-01-15 13:11:59 +01:00
nuest
5f6dc7c0b5 fix docs 2018-01-15 12:18:19 +01:00
nuest
1d6db7ae21 add measurement tests and skip failing box test 2018-01-15 12:08:59 +01:00
nuest
bc71b9c0de minor improvements
and re-generate docs
2018-01-15 12:08:43 +01:00
nuest
1f591b311e bump bugfix version (no user-side changes) 2018-01-15 09:23:59 +01:00
nuest
8fdec0c13f use more powerful Authors@R in DESCRIPTION 2018-01-15 09:23:29 +01:00
nuest
3f7ef00caa minor reformatting of README 2018-01-14 22:29:56 +01:00
nuest
b852f6aac3 add coc 2018-01-14 22:14:32 +01:00
nuest
9a81816922 add first test for measurements 2018-01-14 22:14:12 +01:00
nuest
8e1fb7ad10 add appveyor config 2018-01-14 22:13:45 +01:00
nuest
c18d74a210 add codecov configuration 2018-01-14 21:50:10 +01:00
nuest
542f2205eb add tests for phenomena and box 2018-01-14 21:49:51 +01:00
nuest
1712976770 fix travis tests 2018-01-14 21:49:24 +01:00
nuest
ef3fb7f4bb add function wrapper for default endpoint 2018-01-14 21:48:59 +01:00
nuest
d3d758d554 add tests and Travis CI configuration 2018-01-14 18:40:22 +01:00
b9ff952489
v0.3.2 2018-01-13 16:19:50 +01:00
07ab9b3096
fix summary.sensebox() last_measurement_within 2018-01-13 16:17:59 +01:00
a8a1effa48
make R CMD check --as-cran pass
fixes #9

also update examples & shorten their runtime a bit
2018-01-13 15:17:23 +01:00
44d9026936
document osem_as_sensebox() (#13) 2018-01-13 15:16:11 +01:00
2db7d87a92
remove deprecated NSE functions from dplyr 2018-01-13 15:13:39 +01:00
cd6f3c6fbb
add plot.sensebox() dependencies to suggests
fixes #15
2018-01-13 15:08:31 +01:00
416d986c11
expose mar for plot.osem_measurements(), fixes #12 2018-01-05 00:13:53 +01:00
8f70a242d0
expose mar for plot.sensebox(), fixes #12 2018-01-05 00:04:30 +01:00
84dfc3226a
fix printing of measurements 2018-01-05 00:03:45 +01:00
628825c7f4
no download progress for non-interactive sessions
also add option to disable progress info manually to
- osem_measurements()
- osem_boxes()

fixes #11
2018-01-05 00:03:04 +01:00
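The behavior described above is commonly implemented by defaulting a progress flag to `interactive()`; the function and parameter names here are illustrative, not the package's actual signature:

```r
# Show download progress only in interactive sessions,
# with a manual override for the caller (hypothetical wrapper).
fetch_with_progress <- function(url, progress = interactive()) {
  if (progress)
    httr::GET(url, httr::progress())  # progress bar on the console
  else
    httr::GET(url)                    # silent, e.g. in scripts/knitr
}
```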
185f668ca8
too many es are not healthy 2017-12-01 20:54:16 +01:00
a65cb9a280
osem_measurements: use factors for paged reqs 2017-11-30 17:54:00 +01:00
dc7b86d391
remove note about #6
fixes #6, was fixed upstream
2017-11-30 17:54:00 +01:00
9444ea1702
fix docs 2017-11-30 17:54:00 +01:00
dcc5498267
v0.3.1 2017-11-30 01:09:14 +01:00
fa22acb40e
add package documentation
fixes #5
2017-11-30 01:09:10 +01:00
182ece1fa5
adapt to new API format
fixes #4
2017-11-29 23:22:02 +01:00
d5241e4817
add .gitattributes
should fix false language classification, improving repo discoverability
2017-11-29 14:21:04 +01:00
noerw
27026772dc v0.3.0 2017-09-04 23:47:30 +02:00
noerw
8b4ec6295d update vignette build 2017-08-25 11:37:03 +02:00
noerw
2b8762d52c rename osem_as_sf to st_as_sf.sensebox
for consistency and better integration with sf
2017-08-25 11:37:03 +02:00
noerw
2092976f86 add outlier filtering in vignette 2017-08-25 11:36:58 +02:00
noerw
ef2baa6559 add S3 utility functions
attach the class to an object:
- osem_as_sensebox
- osem_as_measurements

retain the class after filtering / mutating / subsetting:
- filter.sensebox
- filter.osem_measurements
- mutate.sensebox
- mutate.osem_measurements
- `[.sensebox`
- `[.osem_measurements`
2017-08-25 11:36:58 +02:00
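The class-retaining helpers listed in this commit typically all follow the same pattern: delegate to the underlying data.frame method, then re-attach the custom class. A simplified sketch (not the package's actual code):

```r
# Keep the sensebox class after subsetting with `[`:
`[.sensebox` <- function(x, ...) {
  ret <- NextMethod()        # data.frame subsetting does the real work
  class(ret) <- class(x)     # re-attach sensebox (and inherited) classes
  ret
}
```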
noerw
c216ec9f61 remove note about #3, fix typos 2017-08-24 19:17:07 +02:00
87 changed files with 9802 additions and 1037 deletions

@ -1,3 +1,11 @@
 ^.*\.Rproj$
 ^\.Rproj\.user$
-^tools/.*$
+^tools*$
+^\.travis\.yml$
+^appveyor\.yml$
+^CONDUCT\.md$
+^codecov\.yml$
+^\.lintr$
+^opensensmapr_.*\.tar\.gz$
+^cran-comments\.md$
+^CRAN-SUBMISSION$

.aspell/defaults.R Normal file
@ -0,0 +1,4 @@
Rd_files <- vignettes <- R_files <- description <-
list(encoding = "UTF-8",
language = "en",
dictionaries = c("en_stats", "opensensmapr"))

.aspell/opensensmapr.rds Normal file (binary file not shown)

.gitattributes vendored Normal file
@ -0,0 +1,2 @@
inst/* linguist-documentation
man/* linguist-documentation

.gitignore vendored
@ -3,5 +3,8 @@
 .Rhistory
 .RData
 .Ruserdata
+*.Rcheck
 *.log
+cran-comments.md
+opensensmapr_*.tar.gz

.lintr Normal file
@ -0,0 +1,14 @@
exclusions: list.files(path = 'inst/doc', full.names = TRUE)
linters: with_defaults(
# we use snake case
camel_case_linter = NULL,
# '=' for assignment is fine :^)
assignment_linter = NULL,
# single quotes are fine
single_quotes_linter = NULL,
# nobody reads code on a vt100 anymore
line_length_linter(120),
# this one throws lots of false positives, dial down the noise
object_usage_linter = NULL,
NULL
)

.travis.yml Normal file
@ -0,0 +1,39 @@
# R for travis: see documentation at https://docs.travis-ci.com/user/languages/r
language: R
sudo: false
cache: packages
warnings_are_errors: true
r_github_packages:
- r-lib/covr
- jimhester/lintr
before_install:
- sudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable --yes
- sudo apt-get --yes --force-yes update -qq
# units/udunits2 dependency:
- sudo apt-get install --yes libudunits2-dev
# sf dependencies:
- sudo apt-get install --yes libproj-dev libgeos-dev libgdal-dev
after_success:
- Rscript -e 'covr::codecov()'
- Rscript -e 'lintr::lint_package()'
matrix:
include:
# fast build
- r: devel
r_build_args: "--no-build-vignettes"
r_check_args: "--no-vignettes --no-manual"
env: NOT_CRAN=true
# strict builds
- r: devel
r_check_args: "--as-cran"
env: NOT_CRAN=false
- r: release
r_check_args: "--as-cran"
env: NOT_CRAN=false

CHANGES
@ -1,18 +0,0 @@
# opensensmapr changelog
### 2017-08-24: v0.2.1
- add labels to `osem_measurements` plots
- add last active counts to `tools/monitor`
#### fixes
- fix regression from #2 for requests without from/to
### 2017-08-23: v0.2.0
- add auto paging for `osem_measurements()`, allowing data retrieval for arbitrary time intervals (#2)
- improve plots for `osem_measurements` & `sensebox` (#1)
- add `sensorId` & `unit` colummn to `get_measurements()` output by default
- show download progress info, hide readr output
- shorten vignette `osem-intro`
#### breaking changes
- return all string columns of `get_measurements()` as factors

CONDUCT.md Normal file
@ -0,0 +1,20 @@
# Contributor Code of Conduct
As contributors and maintainers of this project, we pledge to respect all people who
contribute through any means.
We are committed to making participation in this project a harassment-free experience for
everyone, regardless of their level of experience and personal or cultural traits.
Examples of unacceptable behavior by participants include derogatory comments,
personal attacks, and trolling, both in public or private.
Project maintainers have the right and responsibility to remove, edit, or reject any
contributions that are not aligned to this Code of Conduct. Project maintainers who
do not follow the Code of Conduct may be removed from the project team.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by
opening an issue or contacting one or more of the project maintainers.
This Code of Conduct is adapted from the [Contributor Covenant version 1.0.0](http://contributor-covenant.org/version/1/0/0/).

DESCRIPTION
@ -1,25 +1,48 @@
 Package: opensensmapr
 Type: Package
-Title: Work with Sensor Data from opensensemap.org in R
+Title: Client for the Data API of 'openSenseMap.org'
-Version: 0.2.1
+Version: 0.6.0
-URL: http://github.com/noerw/opensensmapR
+URL: https://github.com/sensebox/opensensmapR
-BugReports: http://github.com/noerw/opensensmapR/issues
+BugReports: https://github.com/sensebox/opensensmapR/issues
+Depends:
+    R (>= 3.5.0)
 Imports:
     dplyr,
     httr,
+    digest,
+    lazyeval,
+    readr,
+    purrr,
     magrittr
 Suggests:
-    readr,
+    maps,
+    maptools,
+    tibble,
+    rgeos,
     sf,
     knitr,
-    rmarkdown
+    rmarkdown,
+    lubridate,
+    units,
+    jsonlite,
+    ggplot2,
+    zoo,
+    lintr,
+    testthat,
+    covr
-Author: Norwin Roosen
-Maintainer: Norwin Roosen <noerw@gmx.de>
-Description: This package ingests data (measurements, sensorstations) from the
-    API of opensensemap.org for analysis in R.
-    The package aims to be compatible with sf and the tidyverse.
-License: GPL-2
+Authors@R: c(person("Norwin", "Roosen", role = c("aut"), email = "hello@nroo.de"),
+    person("Daniel", "Nuest", role = c("ctb"), email = "daniel.nuest@uni-muenster.de", comment = c(ORCID = "0000-0003-2392-6140")),
+    person("Jan", "Stenkamp", role = c("ctb", "cre"), email = "jan.stenkamp@uni-muenster.de"))
+Description: Download environmental measurements and sensor station metadata
+    from the API of open data sensor web platform <https://opensensemap.org> for
+    analysis in R.
+    This platform provides real time data of more than 1500 low-cost sensor
+    stations for PM10, PM2.5, temperature, humidity, UV-A intensity and more
+    phenomena.
+    The package aims to be compatible with 'sf' and the 'Tidyverse', and provides
+    several helper functions for data exploration and transformation.
+License: GPL (>= 2)
 Encoding: UTF-8
 LazyData: true
-RoxygenNote: 6.0.1
+RoxygenNote: 7.2.3
 VignetteBuilder: knitr

LICENSE
@ -1,339 +0,0 @@
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc., <http://fsf.org/>
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
{description}
Copyright (C) {year} {fullname}
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
{signature of Ty Coon}, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License.


@@ -1,18 +1,29 @@
 # Generated by roxygen2: do not edit by hand
+S3method("[",osem_measurements)
+S3method("[",sensebox)
 S3method(osem_measurements,bbox)
 S3method(osem_measurements,default)
 S3method(osem_measurements,sensebox)
+S3method(osem_measurements_archive,default)
+S3method(osem_measurements_archive,sensebox)
 S3method(osem_phenomena,sensebox)
 S3method(plot,osem_measurements)
 S3method(plot,sensebox)
+S3method(print,osem_measurements)
 S3method(print,sensebox)
 S3method(summary,sensebox)
-export(osem_as_sf)
+export(osem_as_measurements)
+export(osem_as_sensebox)
 export(osem_box)
 export(osem_boxes)
+export(osem_clear_cache)
 export(osem_counts)
+export(osem_endpoint)
 export(osem_measurements)
+export(osem_measurements_archive)
 export(osem_phenomena)
+importFrom(graphics,legend)
+importFrom(graphics,par)
 importFrom(graphics,plot)
 importFrom(magrittr,"%>%")

NEWS.md Normal file

@ -0,0 +1,84 @@
# opensensmapr changelog
This project does its best to adhere to semantic versioning.
### 2023-03-06: v0.6.0
- fix package bugs to pass CRAN tests after 4 years of maintenance break
- updated hyperlinks
- don't throw error for empty sensors
- updated tests
- updated maintainer
- updated vignettes
- use precomputed data to create vignettes
  - change archive url to 'https://archive.opensensemap.org/' and check its availability before requesting data
- new features:
- added param bbox for osem_boxes function
- support of multiple grouptags
### 2019-02-09: v0.5.1
- fix package to work with API v6
- box$lastMeasurement may be missing now for long inactive boxes
- add tests
### 2018-10-20: v0.5.0
- fix dynamic method export
- add `osem_measurements_archive()` to fetch measurements from the archive (#23)
- add `box$sensors` containing a data.frame with sensor metadata
- add sensor-IDs to `box$phenomena`
### 2018-09-21: v0.4.3
- dynamically export S3 methods of foreign generics
for compatibility with upcoming R 3.6.0
- add `readr` as default dependency
### 2018-09-05: v0.4.2
- move to sensebox GitHub organization
- pass ... to plot.sensebox()
### 2018-06-07: v0.4.1
- fix `osem_as_measurements()` returning wrong classes
- improve vignettes
- be on CRAN eventually.. hopefully??
### 2018-05-25: v0.4.0
- add caching feature for requests; see vignette osem-serialization
- add vignette osem-serialization
- add vignette osem-history
- fix broken parameter check for osem_measurements(phenomenon = )
- increased test coverage
- package ready for CRAN
### 2018-01-13: v0.3.2
- hide download progress in non interactive sessions (#11)
- fix `print.osem_measurements()`
- fix `summary.sensebox()` `last_measurement_within`
- expose `mar` for plot functions (#12)
- remove deprecated NSE functions from dplyr
- package & documentation improvements
### 2017-11-29: v0.3.1
- compatibility with latest API format (#4)
- add package documentation under `?opensensmapr` (#5)
### 2017-09-04: v0.3.0
- add utility functions: `filter`, `mutate`, `[`, `st_as_sf` for classes `sensebox` and `osem_measurements`
- add `osem_as_sensebox` and `osem_as_measurement` constructors
#### breaking changes
- `osem_as_sf` has moved to `st_as_sf.sensebox` and `st_as_sf.osem_measurements`
### 2017-08-24: v0.2.1
- add labels to `osem_measurements` plots
- add last active counts to `tools/monitor`
#### fixes
- fix regression from #2 for requests without from/to
### 2017-08-23: v0.2.0
- add auto paging for `osem_measurements()`, allowing data retrieval for arbitrary time intervals (#2)
- improve plots for `osem_measurements` & `sensebox` (#1)
- add `sensorId` & `unit` column to `get_measurements()` output by default
- show download progress info, hide readr output
- shorten vignette `osem-intro`
#### breaking changes
- return all string columns of `get_measurements()` as factors

R/00utils.R Normal file

@ -0,0 +1,38 @@
# parses from/to params for get_measurements_ and get_boxes_
parse_dateparams = function (from, to) {
  from = date_as_utc(from)
  to = date_as_utc(to)
  if (from - to > 0) stop('"from" must be earlier than "to"')
  c(date_as_isostring(from), date_as_isostring(to))
}

# NOTE: cannot handle mixed vectors of POSIXlt and POSIXct
date_as_utc = function (date) {
  time = as.POSIXct(date)
  attr(time, 'tzone') = 'UTC'
  time
}

# NOTE: cannot handle mixed vectors of POSIXlt and POSIXct
date_as_isostring = function (date) format.Date(date, format = '%FT%TZ')

isostring_as_date = function (x) as.POSIXct(strptime(x, format = '%FT%T', tz = 'GMT'))

#' Checks for an interactive session using interactive() and a knitr process in
#' the callstack. See https://stackoverflow.com/a/33108841
#'
#' @noRd
is_non_interactive = function () {
  ff = sapply(sys.calls(), function(f) as.character(f[1]))
  any(ff %in% c('knit2html', 'render')) || !interactive()
}

#' custom recursive lapply with better handling of NULL values
#' from https://stackoverflow.com/a/38950304
#' @noRd
recursive_lapply = function(x, fn) {
  if (is.list(x))
    lapply(x, recursive_lapply, fn)
  else
    fn(x)
}

R/api.R

@@ -4,12 +4,40 @@
 # for CSV responses (get_measurements) the readr package is a hidden dependency
 # ==============================================================================
+default_api = 'https://api.opensensemap.org'
+
+#' Get the default openSenseMap API endpoint
+#' @export
+#' @return A character string with the HTTP URL of the openSenseMap API
+osem_endpoint = function() default_api
+
+#' Check if the given openSenseMap API endpoint is available
+#' @param endpoint The API base URL to check, defaulting to \code{\link{osem_endpoint}}
+#' @return \code{TRUE} if the API is available, otherwise \code{stop()} is called.
+osem_ensure_api_available = function(endpoint = osem_endpoint()) {
+  code = FALSE
+  try({
+    code = httr::status_code(httr::GET(endpoint, path = 'stats'))
+  }, silent = TRUE)
+
+  if (code == 200)
+    return(TRUE)
+
+  errtext = paste('The API at', endpoint, 'is currently not available.')
+  if (code != FALSE)
+    errtext = paste0(errtext, ' (HTTP code ', code, ')')
+  if (endpoint == default_api)
+    errtext = c(errtext, 'If the issue persists, please check back at https://status.sensebox.de/778247404 and notify support@sensebox.de')
+  stop(paste(errtext, collapse = '\n  '), call. = FALSE)
+  FALSE
+}
+
 get_boxes_ = function (..., endpoint) {
-  response = osem_request_(endpoint, path = c('boxes'), ...)
+  response = osem_get_resource(endpoint, path = c('boxes'), ...)
   if (length(response) == 0) {
-    warning('no boxes found for this query')
-    return(response)
+    warning('no senseBoxes found for this query')
+    return(osem_as_sensebox(as.data.frame(response)))
   }

   # parse each list element as sensebox & combine them to a single data.frame
@@ -17,22 +45,23 @@ get_boxes_ = function (..., endpoint) {
   df = dplyr::bind_rows(boxesList)
   df$exposure = df$exposure %>% as.factor()
   df$model = df$model %>% as.factor()
-  df$grouptag = df$grouptag %>% as.factor()
+  if (!is.null(df$grouptag)) {
+    df$grouptag = df$grouptag %>% as.factor()
+  }
   df
 }

-get_box_ = function (boxId, endpoint) {
-  osem_request_(endpoint, path = c('boxes', boxId)) %>%
+get_box_ = function (boxId, endpoint, ...) {
+  osem_get_resource(endpoint, path = c('boxes', boxId), ..., progress = FALSE) %>%
     parse_senseboxdata()
 }

-get_measurements_ = function (..., endpoint) {
-  result = osem_request_(endpoint, c('boxes', 'data'), ..., type = 'text')
+parse_measurement_csv = function (resText) {
   # parse the CSV response manually & mute readr
   suppressWarnings({
-    result = readr::read_csv(result, col_types = readr::cols(
-      .default = readr::col_factor(NULL),
+    result = readr::read_csv(resText, col_types = readr::cols(
+      # factor as default would raise issues with concatenation of multiple requests
+      .default = readr::col_character(),
       createdAt = readr::col_datetime(),
       value = readr::col_double(),
       lat = readr::col_double(),
@@ -41,19 +70,79 @@ get_measurements_ = function (..., endpoint) {
     ))
   })

-  class(result) = c('osem_measurements', class(result))
-  result
+  osem_as_measurements(result)
+}
+
+get_measurements_ = function (..., endpoint) {
+  osem_get_resource(endpoint, c('boxes', 'data'), ..., type = 'text') %>%
+    parse_measurement_csv
 }

-get_stats_ = function (endpoint) {
-  result = osem_request_(endpoint, path = c('stats'))
+get_stats_ = function (endpoint, cache) {
+  result = osem_get_resource(endpoint, path = c('stats'), progress = FALSE, cache = cache)
   names(result) = c('boxes', 'measurements', 'measurements_per_minute')
   result
 }

-osem_request_ = function (host, path, ..., type = 'parsed') {
-  res = httr::GET(host, httr::progress(), path = path, query = list(...))
-  #print(res$url)
+#' Get any resource from openSenseMap API, possibly cache the response
+#'
+#' @param host API host
+#' @param path resource URL
+#' @param ... All other parameters interpreted as request query parameters
+#' @param type Passed to httr; 'parsed' to return an R object from the response, 'text' for a raw response
+#' @param progress Boolean whether to print download progress information
+#' @param cache Optional path to a directory where responses will be cached. If not NA, no request will be made when the response for the given query is already cached.
+#' @return Result of a request to the openSenseMap API
+#' @noRd
+osem_get_resource = function (host, path, ..., type = 'parsed', progress = TRUE, cache = NA) {
+  query = list(...)
+  if (!is.na(cache)) {
+    filename = osem_cache_filename(path, query, host) %>% paste(cache, ., sep = '/')
+    if (file.exists(filename))
+      return(readRDS(filename))
+  }
+
+  res = osem_request_(host, path, query, type, progress)
+  if (!is.na(cache)) saveRDS(res, filename)
+  res
+}
+
+osem_cache_filename = function (path, query = list(), host = osem_endpoint()) {
+  httr::modify_url(url = host, path = path, query = query) %>%
+    digest::digest(algo = 'sha1') %>%
+    paste('osemcache', ., 'rds', sep = '.')
+}
+
+#' Purge cached responses from the given cache directory
+#'
+#' @param location A path to the cache directory, defaults to the
+#'   sessions' \code{tempdir()}
+#' @return Boolean whether the deletion was successful
+#'
+#' @export
+#' @examples
+#' \dontrun{
+#' osem_boxes(cache = tempdir())
+#' osem_clear_cache()
+#'
+#' cachedir = paste(getwd(), 'osemcache', sep = '/')
+#' dir.create(file.path(cachedir), showWarnings = FALSE)
+#' osem_boxes(cache = cachedir)
+#' osem_clear_cache(cachedir)
+#' }
+osem_clear_cache = function (location = tempdir()) {
+  list.files(location, pattern = 'osemcache\\..*\\.rds') %>%
+    lapply(function (f) file.remove(paste(location, f, sep = '/'))) %>%
+    unlist() %>%
+    all()
+}
+
+osem_request_ = function (host, path, query = list(), type = 'parsed', progress = TRUE) {
+  # stop() if API is not available
+  osem_ensure_api_available(host)
+
+  progress = if (progress && !is_non_interactive()) httr::progress() else NULL
+  res = httr::GET(host, progress, path = path, query = query)
+
   if (httr::http_error(res)) {
     content = httr::content(res, 'parsed', encoding = 'UTF-8')
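The caching introduced here keys each response by a SHA-1 digest of the full request URL, so identical queries map to the same `.rds` file. A standalone sketch of that idea (the `cache_key()` wrapper is illustrative, not part of the package):

```r
library(magrittr)

# illustrative reimplementation of the osem_cache_filename() scheme:
# hash the fully qualified request URL into a stable file name
cache_key = function (url) {
  digest::digest(url, algo = 'sha1') %>%
    paste('osemcache', ., 'rds', sep = '.')
}

k1 = cache_key('https://api.opensensemap.org/boxes?grouptag=ifgi')
k2 = cache_key('https://api.opensensemap.org/boxes?grouptag=ifgi')
identical(k1, k2) # TRUE: the same query always hits the same cache file
```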

R/archive.R Normal file

@ -0,0 +1,173 @@
# client for archive.opensensemap.org
# in this archive, CSV files with the measurements of each sensor per day are provided.
default_archive_url = 'https://archive.opensensemap.org/'
#' Returns the default endpoint for the archive *download*
#' While the front end domain is archive.opensensemap.org, file downloads
#' are provided via sciebo.
osem_archive_endpoint = function () default_archive_url
#' Fetch day-wise measurements for a single box from the openSenseMap archive.
#'
#' This function is significantly faster than \code{\link{osem_measurements}} for large
#' time-frames, as daily CSV dumps for each sensor from
#' \href{https://archive.opensensemap.org}{archive.opensensemap.org} are used.
#' Note that the latest data available is from the previous day.
#'
#' By default, data for all sensors of a box is fetched, but you can select a
#' subset with a \code{\link[dplyr]{dplyr}}-style NSE filter expression.
#'
#' The function will warn when no data is available in the selected period,
#' but continue the remaining download.
#'
#' @param x A `sensebox data.frame` of a single box, as retrieved via \code{\link{osem_box}},
#' to download measurements for.
#' @param ... see parameters below
#' @param fromDate Start date for measurement download, must be convertible via `as.Date`.
#' @param toDate End date for measurement download (inclusive).
#' @param sensorFilter An NSE formula matching \code{x$sensors}, selecting a subset of sensors.
#' @param progress Whether to print download progress information, defaults to \code{TRUE}.
#' @return A \code{tbl_df} containing observations of all selected sensors for each time stamp.
#'
#' @seealso \href{https://archive.opensensemap.org}{openSenseMap archive}
#' @seealso \code{\link{osem_measurements}}
#' @seealso \code{\link{osem_box}}
#'
#' @export
osem_measurements_archive = function (x, ...) UseMethod('osem_measurements_archive')
#' @export
osem_measurements_archive.default = function (x, ...) {
# NOTE: to implement for a different class:
# in order to call `archive_fetch_measurements()`, `box` must be a dataframe
# with a single row and the columns `X_id` and `name`
stop(paste('not implemented for class', toString(class(x))))
}
# ==============================================================================
#
#' @describeIn osem_measurements_archive Get daywise measurements for one or more sensors of a single box.
#' @export
#' @examples
#' \donttest{
#' # fetch measurements for a single day
#' box = osem_box('593bcd656ccf3b0011791f5a')
#' m = osem_measurements_archive(box, as.POSIXlt('2018-09-13'))
#'
#' # fetch measurements for a date range and selected sensors
#' sensors = ~ phenomenon %in% c('Temperatur', 'Beleuchtungsstärke')
#' m = osem_measurements_archive(
#' box,
#' as.POSIXlt('2018-09-01'), as.POSIXlt('2018-09-30'),
#' sensorFilter = sensors
#' )
#' }
osem_measurements_archive.sensebox = function (x, fromDate, toDate = fromDate, sensorFilter = ~ TRUE, ..., progress = TRUE) {
if (nrow(x) != 1)
stop('this function only works for exactly one senseBox!')
# filter sensors using NSE, for example: `~ phenomenon == 'Temperatur'`
sensors = x$sensors[[1]] %>%
dplyr::filter(lazyeval::f_eval(sensorFilter, .))
# fetch each sensor separately
dfs = by(sensors, 1:nrow(sensors), function (sensor) {
df = archive_fetch_measurements(x, sensor$id, fromDate, toDate, progress) %>%
dplyr::select(createdAt, value) %>%
#dplyr::mutate(unit = sensor$unit, sensor = sensor$sensor) %>% # inject sensor metadata
dplyr::rename_at(., 'value', function(v) sensor$phenomenon)
})
# merge all data.frames by timestamp
dfs %>% purrr::reduce(dplyr::full_join, 'createdAt')
}
#' fetch measurements from the archive for a single box and a single sensor
#'
#' @param box A sensebox data.frame with a single box
#' @param sensorId Character specifying the sensor
#' @param fromDate Start date for measurement download, must be convertible via `as.Date`.
#' @param toDate End date for measurement download (inclusive).
#' @param progress whether to print progress
#' @return A \code{tbl_df} containing observations of all selected sensors for each time stamp.
archive_fetch_measurements = function (box, sensorId, fromDate, toDate, progress) {
osem_ensure_archive_available()
dates = list()
from = fromDate
while (from <= toDate) {
dates = append(dates, list(from))
from = from + as.difftime(1, units = 'days')
}
http_handle = httr::handle(osem_archive_endpoint()) # reuse the http connection for speed!
progress = if (progress && !is_non_interactive()) httr::progress() else NULL
measurements = lapply(dates, function(date) {
url = build_archive_url(date, box, sensorId)
res = httr::GET(url, progress, handle = http_handle)
if (httr::http_error(res)) {
warning(paste(
httr::status_code(res),
'on day', format.Date(date, '%F'),
'for sensor', sensorId
))
if (httr::status_code(res) == 404)
return(data.frame(createdAt = as.POSIXlt(x = integer(0), origin = date), value = double()))
}
measurements = httr::content(res, type = 'text', encoding = 'UTF-8') %>%
parse_measurement_csv
})
measurements %>% dplyr::bind_rows()
}
#' returns URL to fetch measurements from a sensor for a specific date,
#' based on `osem_archive_endpoint()`
#' @noRd
build_archive_url = function (date, box, sensorId) {
d = format.Date(date, '%F')
format = 'csv'
paste(
osem_archive_endpoint(),
d,
osem_box_to_archivename(box),
paste(paste(sensorId, d, sep = '-'), format, sep = '.'),
sep = '/'
)
}
#' replace chars in box name according to archive script:
#' https://github.com/sensebox/osem-archiver/blob/612e14b/helpers.sh#L66
#'
#' @param box A sensebox data.frame
#' @return character with archive identifier for each box
osem_box_to_archivename = function (box) {
name = gsub('[^A-Za-z0-9._-]', '_', box$name)
paste(box$X_id, name, sep = '-')
}
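To illustrate the sanitization, here is what `osem_box_to_archivename()` produces for a made-up box (the ID and name are fabricated for the example):

```r
# fabricated example box; only X_id and name are needed here
box = data.frame(X_id = '593bcd656ccf3b0011791f5a', name = 'my box (kitchen)',
                 stringsAsFactors = FALSE)

# every character outside [A-Za-z0-9._-] becomes an underscore,
# then the senseBox ID is prepended
osem_box_to_archivename(box)
# "593bcd656ccf3b0011791f5a-my_box__kitchen_"
```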
#' Check if the given openSenseMap archive endpoint is available
#' @param endpoint The archive base URL to check, defaulting to \code{\link{osem_archive_endpoint}}
#' @return \code{TRUE} if the archive is available, otherwise \code{stop()} is called.
osem_ensure_archive_available = function(endpoint = osem_archive_endpoint()) {
code = FALSE
try({
code = httr::status_code(httr::GET(endpoint))
}, silent = TRUE)
if (code == 200)
return(TRUE)
errtext = paste('The archive at', endpoint, 'is currently not available.')
if (code != FALSE)
errtext = paste0(errtext, ' (HTTP code ', code, ')')
stop(paste(errtext, collapse='\n '), call. = FALSE)
FALSE
}
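Both `osem_ensure_api_available()` and this archive check follow the same guard pattern: issue one cheap GET up front, and fail fast with a readable message instead of letting every subsequent download error out. A minimal generic version of the pattern (function and parameter names are illustrative):

```r
# generic availability guard, distilled from the two checks above
ensure_available = function (url, what = 'resource') {
  code = FALSE
  try({
    code = httr::status_code(httr::GET(url))
  }, silent = TRUE)
  if (identical(code, 200L)) return(TRUE)
  stop(paste(what, 'at', url, 'is currently not available.'), call. = FALSE)
}
```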

R/box.R

@@ -18,25 +18,64 @@
 #' @param to Only return boxes that were measuring earlier than this time
 #' @param phenomenon Only return boxes that measured the given phenomenon in the
 #'   time interval as specified through \code{date} or \code{from / to}
+#' @param bbox Only return boxes that are within the given boundingbox,
+#'   vector of 4 WGS84 coordinates.
+#'   Order is: longitude southwest, latitude southwest, longitude northeast, latitude northeast.
+#'   Minimal and maximal values are: -180, 180 for longitude and -90, 90 for latitude.
 #' @param endpoint The URL of the openSenseMap API instance
+#' @param progress Whether to print download progress information, defaults to \code{TRUE}
+#' @param cache Whether to cache the result, defaults to false.
+#'   If a valid path to a directory is given, the response will be cached there.
+#'   Subsequent identical requests will return the cached data instead.
 #' @return A \code{sensebox data.frame} containing a box in each row
 #'
 #' @seealso \href{https://docs.opensensemap.org/#api-Measurements-findAllBoxes}{openSenseMap API documentation (web)}
 #' @seealso \code{\link{osem_phenomena}}
+#' @seealso \code{\link{osem_box}}
+#' @seealso \code{\link{osem_clear_cache}}
+#'
 #' @export
 #' @examples
-#' # get *all* boxes available on the API
-#' b = osem_boxes()
 #'
-#' # get all boxes with grouptag 'ifgi' that are placed outdoors
-#' b = osem_boxes(grouptag = 'ifgi', exposure = 'outdoor')
+#' \dontrun{
+#' # get *all* boxes available on the API
+#' b = osem_boxes()
 #'
-#' # get all boxes that have measured PM2.5 in the last 4 hours
-#' b = osem_boxes(date = Sys.time(), phenomenon = 'PM2.5')
+#' # get all boxes with grouptag 'ifgi' that are placed outdoors
+#' b = osem_boxes(grouptag = 'ifgi', exposure = 'outdoor')
 #'
+#' # get all boxes with model 'luftdaten_sds011_dht22'
+#' b = osem_boxes(grouptag = 'ifgi')
+#'
+#' # get all boxes that have measured PM2.5 in the last 4 hours
+#' b = osem_boxes(date = Sys.time(), phenomenon = 'PM2.5')
+#'
+#' # get all boxes that have measured PM2.5 between Jan & Feb 2018
+#' library(lubridate)
+#' b = osem_boxes(
+#'   from = date('2018-01-01'),
+#'   to = date('2018-02-01'),
+#'   phenomenon = 'PM2.5'
+#' )
+#'
+#' # get all boxes from a custom (selfhosted) openSenseMap API
+#' b = osem_box(endpoint = 'http://api.my-custom-osem.com')
+#'
+#' # get all boxes and cache the response, in order to provide
+#' # reproducible results in the future. Also useful for development
+#' # to avoid repeated loading times!
+#' b = osem_boxes(cache = getwd())
+#' b = osem_boxes(cache = getwd())
+#'
+#' # get *all* boxes available on the API, without showing download progress
+#' b = osem_boxes(progress = FALSE)
+#' }
 osem_boxes = function (exposure = NA, model = NA, grouptag = NA,
                        date = NA, from = NA, to = NA, phenomenon = NA,
-                       endpoint = 'https://api.opensensemap.org') {
+                       bbox = NA,
+                       endpoint = osem_endpoint(),
+                       progress = TRUE,
+                       cache = NA) {

   # error, if phenomenon, but no time given
   if (!is.na(phenomenon) && is.na(date) && is.na(to) && is.na(from))
@@ -54,16 +93,18 @@ osem_boxes = function (exposure = NA, model = NA, grouptag = NA,
     stop('Parameter "from"/"to" must be used together')
   }

-  query = list(endpoint = endpoint)
+  query = list(endpoint = endpoint, progress = progress, cache = cache)
   if (!is.na(exposure)) query$exposure = exposure
   if (!is.na(model)) query$model = model
   if (!is.na(grouptag)) query$grouptag = grouptag
   if (!is.na(phenomenon)) query$phenomenon = phenomenon
+  if (all(!is.na(bbox))) query$bbox = paste(bbox, collapse = ', ')

   if (!is.na(to) && !is.na(from))
     query$date = parse_dateparams(from, to) %>% paste(collapse = ',')
   else if (!is.na(date))
-    query$date = utc_date(date) %>% date_as_isostring()
+    query$date = date_as_utc(date) %>% date_as_isostring()

   do.call(get_boxes_, query)
 }
@@ -74,17 +115,29 @@ osem_boxes = function (exposure = NA, model = NA, grouptag = NA,
 #'
 #' @param boxId A string containing a senseBox ID
 #' @param endpoint The URL of the openSenseMap API instance
+#' @param cache Whether to cache the result, defaults to false.
+#'   If a valid path to a directory is given, the response will be cached there. Subsequent identical requests will return the cached data instead.
 #' @return A \code{sensebox data.frame} containing a box in each row
 #'
 #' @seealso \href{https://docs.opensensemap.org/#api-Measurements-findAllBoxes}{openSenseMap API documentation (web)}
 #' @seealso \code{\link{osem_phenomena}}
+#' @seealso \code{\link{osem_boxes}}
+#' @seealso \code{\link{osem_clear_cache}}
 #' @export
 #' @examples
-#' # get a specific box by ID
-#' b = osem_box('593bcd656ccf3b0011791f5a')
+#' \dontrun{
+#' # get a specific box by ID
+#' b = osem_box('57000b8745fd40c8196ad04c')
 #'
-osem_box = function (boxId, endpoint = 'https://api.opensensemap.org') {
-  get_box_(boxId, endpoint = endpoint)
+#' # get a specific box by ID from a custom (selfhosted) openSenseMap API
+#' b = osem_box('51030b8725fd30c2196277da', 'http://api.my-custom-osem.com')
+#'
+#' # get a specific box by ID and cache the response, in order to provide
+#' # reproducible results in the future.
+#' b = osem_box('51030b8725fd30c2196277da', cache = tempdir())
+#' }
+osem_box = function (boxId, endpoint = osem_endpoint(), cache = NA) {
+  get_box_(boxId, endpoint = endpoint, cache = cache)
 }

 # ==============================================================================
@@ -100,32 +153,64 @@ parse_senseboxdata = function (boxdata) {
   # extract nested lists for later use & clean them from the list
   # to allow a simple data.frame structure
   sensors = boxdata$sensors
-  location = boxdata$loc
-  boxdata[c('loc', 'sensors', 'image', 'boxType')] <- NULL
-  thebox = as.data.frame(boxdata, stringsAsFactors = F)
+  location = boxdata$currentLocation
+  lastMeasurement = boxdata$lastMeasurementAt # rename for backwards compat < 0.5.1
+  grouptags = boxdata$grouptag
+  boxdata[c(
+    'loc', 'locations', 'currentLocation', 'sensors', 'image', 'boxType', 'lastMeasurementAt', 'grouptag'
+  )] = NULL
+  thebox = as.data.frame(boxdata, stringsAsFactors = FALSE)

   # parse timestamps (updatedAt might be not defined)
-  thebox$createdAt = as.POSIXct(strptime(thebox$createdAt, format='%FT%T', tz = 'GMT'))
+  thebox$createdAt = isostring_as_date(thebox$createdAt)
   if (!is.null(thebox$updatedAt))
-    thebox$updatedAt = as.POSIXct(strptime(thebox$updatedAt, format='%FT%T', tz = 'GMT'))
+    thebox$updatedAt = isostring_as_date(thebox$updatedAt)
+  if (!is.null(lastMeasurement))
+    thebox$lastMeasurement = isostring_as_date(lastMeasurement)
+
+  # add empty sensortype to sensors without type
+  if (!('sensorType' %in% names(sensors[[1]]))) {
+    sensors[[1]]$sensorType <- NA
+  }
+
+  # create a dataframe of sensors
+  thebox$sensors = sensors %>%
+    recursive_lapply(function (x) if (is.null(x)) NA else x) %>% # replace NULLs with NA
+    lapply(as.data.frame, stringsAsFactors = FALSE) %>%
+    dplyr::bind_rows(.) %>%
+    dplyr::select(phenomenon = title, id = X_id, unit, sensor = sensorType) %>%
+    list

   # extract metadata from sensors
-  thebox$phenomena = list(unlist(lapply(sensors, function(s) { s$title })))
-  # FIXME: if one sensor has NA, max() returns bullshit
-  thebox$lastMeasurement = max(lapply(sensors, function(s) {
-    if (!is.null(s$lastMeasurement))
-      as.POSIXct(strptime(s$lastMeasurement$createdAt, format = '%FT%T', tz = 'GMT'))
-    else
-      NA
-  })[[1]])
+  thebox$phenomena = sensors %>%
+    stats::setNames(lapply(., function (s) s$`_id`)) %>%
+    lapply(function(s) s$title) %>%
+    unlist %>% list # convert to vector

   # extract coordinates & transform to simple feature object
-  thebox$lon = location[[1]]$geometry$coordinates[[1]]
-  thebox$lat = location[[1]]$geometry$coordinates[[2]]
-  if (length(location[[1]]$geometry$coordinates) == 3)
-    thebox$height = location[[1]]$geometry$coordinates[[3]]
+  thebox$lon = location$coordinates[[1]]
+  thebox$lat = location$coordinates[[2]]
+  thebox$locationtimestamp = isostring_as_date(location$timestamp)
+  if (length(location$coordinates) == 3)
+    thebox$height = location$coordinates[[3]]
+
+  # extract grouptag(s) from box
+  if (length(grouptags) == 0)
+    thebox$grouptag = NULL
+  if (length(grouptags) > 0) {
+    # if box does not have grouptag dont set attribute
+    if (grouptags[[1]] == '') {
+      thebox$grouptag = NULL
+    }
+    else {
+      thebox$grouptag = grouptags[[1]]
+    }
+  }
+  if (length(grouptags) > 1)
+    thebox$grouptag2 = grouptags[[2]]
+  if (length(grouptags) > 2)
+    thebox$grouptag3 = grouptags[[3]]

   # attach a custom class for methods
-  class(thebox) = c('sensebox', class(thebox))
-  thebox
+  osem_as_sensebox(thebox)
 }


@@ -1,71 +1,59 @@
 #' @export
-plot.sensebox = function (x, ...) {
+plot.sensebox = function (x, ..., mar = c(2, 2, 1, 1)) {
+  if (
+    !requireNamespace("sf", quietly = TRUE) ||
+    !requireNamespace("maps", quietly = TRUE) ||
+    !requireNamespace("maptools", quietly = TRUE) ||
+    !requireNamespace("rgeos", quietly = TRUE)
+  ) {
+    stop('this function requires the packages sf, maps, maptools, rgeos')
+  }
+
   geom = x %>%
-    osem_as_sf() %>%
+    sf::st_as_sf() %>%
     sf::st_geometry()
 
   bbox = sf::st_bbox(geom)
 
-  library(maps)
   world = maps::map('world', plot = FALSE, fill = TRUE) %>%
     sf::st_as_sf() %>%
     sf::st_geometry()
 
-  oldpar = par()
-  par(mar = c(2,2,1,1))
-  plot(world, col = 'gray', xlim = bbox[c(1,3)], ylim = bbox[c(2,4)], axes = T)
-  plot(geom, add = T, col = x$exposure)
+  oldpar <- par(no.readonly = TRUE)
+  on.exit(par(oldpar))
+  par(mar = mar)
+  plot(world, col = 'gray', xlim = bbox[c(1, 3)], ylim = bbox[c(2, 4)], axes = TRUE, ...)
+  plot(geom, add = TRUE, col = x$exposure, ...)
   legend('left', legend = levels(x$exposure), col = 1:length(x$exposure), pch = 1)
-  par(mar = oldpar$mar)
   invisible(x)
 }
 
 #' @export
-print.sensebox = function(x, ...) {
-  important_columns = c('name', 'exposure', 'lastMeasurement', 'phenomena')
+print.sensebox = function(x, columns = c('name', 'exposure', 'lastMeasurement', 'phenomena'), ...) {
   data = as.data.frame(x)
-  print(data[important_columns], ...)
+  print(dplyr::select(data, dplyr::one_of(columns)), ...)
   invisible(x)
 }
 
 #' @export
 summary.sensebox = function(object, ...) {
-  cat('box total:', nrow(object), fill = T)
+  cat('boxes total:', nrow(object), fill = TRUE)
   cat('\nboxes by exposure:')
   table(object$exposure) %>% print()
   cat('\nboxes by model:')
   table(object$model) %>% print()
   cat('\n')
 
-  diffNow = (utc_date(Sys.time()) - object$lastMeasurement) %>% as.numeric(unit='hours')
-  neverActive = object[is.na(object$lastMeasurement), ] %>% nrow()
+  diffNow = (date_as_utc(Sys.time()) - object$lastMeasurement) %>% as.numeric(unit = 'hours')
   list(
     'last_measurement_within' = c(
-      '1h' = nrow(object[diffNow <= 1, ]) - neverActive,
-      '1d' = nrow(object[diffNow <= 24, ]) - neverActive,
-      '30d' = nrow(object[diffNow <= 720, ]) - neverActive,
-      '365d' = nrow(object[diffNow <= 8760, ]) - neverActive,
-      'never' = neverActive
+      '1h' = nrow(dplyr::filter(object, diffNow <= 1)),
+      '1d' = nrow(dplyr::filter(object, diffNow <= 24)),
+      '30d' = nrow(dplyr::filter(object, diffNow <= 720)),
+      '365d' = nrow(dplyr::filter(object, diffNow <= 8760)),
+      'never' = nrow(dplyr::filter(object, is.na(lastMeasurement)))
     )
   ) %>% print()
 
   oldest = object[object$createdAt == min(object$createdAt), ]
   newest = object[object$createdAt == max(object$createdAt), ]
-  cat('oldest box:', format(oldest$createdAt, '%F %T'), paste0('(', oldest$name, ')'), fill = T)
-  cat('newest box:', format(newest$createdAt, '%F %T'), paste0('(', newest$name, ')'), fill = T)
-  cat('\nsensors per box:', fill = T)
+  cat('oldest box:', format(oldest$createdAt, '%F %T'), paste0('(', oldest$name, ')'), fill = TRUE)
+  cat('newest box:', format(newest$createdAt, '%F %T'), paste0('(', newest$name, ')'), fill = TRUE)
+  cat('\nsensors per box:', fill = TRUE)
   lapply(object$phenomena, length) %>%
     as.numeric() %>%
     summary() %>%
@@ -73,3 +61,13 @@ summary.sensebox = function(object, ...) {
   invisible(object)
 }
+
+#' Converts a foreign object to a sensebox data.frame.
+#' @param x A data.frame to attach the class to
+#' @return data.frame of class \code{sensebox}
+#' @export
+osem_as_sensebox = function(x) {
+  ret = as.data.frame(x)
+  class(ret) = c('sensebox', class(x))
+  ret
+}
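As an aside: the new `osem_as_sensebox()` constructor pairs with the class-preserving subset and dplyr methods added elsewhere in this diff. A minimal sketch of the intended behavior (illustration only, assuming the package is loaded and a plain data.frame as input):

```r
library(opensensmapr)

# a plain data.frame standing in for externally loaded box metadata
df = data.frame(
  name = c('box-a', 'box-b'),
  exposure = factor(c('indoor', 'outdoor')),
  stringsAsFactors = FALSE
)

b = osem_as_sensebox(df)
class(b)    # 'sensebox' is prepended to the original classes

b2 = b[1, ] # `[.sensebox` is meant to keep class & attributes on subsetting
```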


@@ -7,10 +7,13 @@
 #' @details Note that the API caches these values for 5 minutes.
 #'
 #' @param endpoint The URL of the openSenseMap API
+#' @param cache Whether to cache the result, defaults to false.
+#'   If a valid path to a directory is given, the response will be cached there.
+#'   Subsequent identical requests will return the cached data instead.
 #' @return A named \code{list} containing the counts
 #'
 #' @export
 #' @seealso \href{https://docs.opensensemap.org/#api-Misc-getStatistics}{openSenseMap API documentation (web)}
-osem_counts = function (endpoint = 'https://api.opensensemap.org') {
-  get_stats_(endpoint)
+osem_counts = function(endpoint = osem_endpoint(), cache = NA) {
+  get_stats_(endpoint, cache)
 }

R/external_generics.R (new file)

@@ -0,0 +1,126 @@
# helpers for the dplyr & co related functions
# also delayed method registration
#
# Methods for external generics (except when from `base`) should be registered,
# but not exported: see https://github.com/klutometis/roxygen/issues/796
# Until roxygen supports this use case properly, we're using a different
# workaround than suggested, copied from edzer's sf package:
# dynamically register the methods only when the related package is loaded as well.
# ====================== base generics =========================
#' maintains class / attributes after subsetting
#' @noRd
#' @export
`[.sensebox` = function(x, i, ...) {
s = NextMethod('[')
mostattributes(s) = attributes(s)
s
}
#' maintains class / attributes after subsetting
#' @noRd
#' @export
`[.osem_measurements` = function(x, i, ...) {
s = NextMethod()
mostattributes(s) = attributes(x)
s
}
# ====================== dplyr generics =========================
#' Simple factory function meant to implement dplyr functions for other classes,
#' which calls a callback to attach the original class again after the fact.
#'
#' @param callback The function to call after the dplyr function
#' @noRd
dplyr_class_wrapper = function(callback) {
function(.data, ..., .dots) callback(NextMethod())
}
#' Return rows with matching conditions, while maintaining class & attributes
#' @param .data A sensebox data.frame to filter
#' @param .dots see corresponding function in package \code{\link{dplyr}}
#' @param ... other arguments
#' @seealso \code{\link[dplyr]{filter}}
filter.sensebox = dplyr_class_wrapper(osem_as_sensebox)
#' Add new variables to the data, while maintaining class & attributes
#' @param .data A sensebox data.frame to mutate
#' @param .dots see corresponding function in package \code{\link{dplyr}}
#' @param ... other arguments
#' @seealso \code{\link[dplyr]{mutate}}
mutate.sensebox = dplyr_class_wrapper(osem_as_sensebox)
#' Return rows with matching conditions, while maintaining class & attributes
#' @param .data An osem_measurements data.frame to filter
#' @param .dots see corresponding function in package \code{\link{dplyr}}
#' @param ... other arguments
#' @seealso \code{\link[dplyr]{filter}}
filter.osem_measurements = dplyr_class_wrapper(osem_as_measurements)
#' Add new variables to the data, while maintaining class & attributes
#' @param .data An osem_measurements data.frame to mutate
#' @param .dots see corresponding function in package \code{\link{dplyr}}
#' @param ... other arguments
#' @seealso \code{\link[dplyr]{mutate}}
mutate.osem_measurements = dplyr_class_wrapper(osem_as_measurements)
# ====================== sf generics =========================
#' Convert a \code{sensebox} dataframe to an \code{\link[sf]{st_sf}} object.
#'
#' @param x The object to convert
#' @param ... maybe more objects to convert
#' @return The object with an st_geometry column attached.
st_as_sf.sensebox = function (x, ...) {
NextMethod(x, ..., coords = c('lon', 'lat'), crs = 4326)
}
#' Convert a \code{osem_measurements} dataframe to an \code{\link[sf]{st_sf}} object.
#'
#' @param x The object to convert
#' @param ... maybe more objects to convert
#' @return The object with an st_geometry column attached.
st_as_sf.osem_measurements = function (x, ...) {
NextMethod(x, ..., coords = c('lon', 'lat'), crs = 4326)
}
# from: https://github.com/tidyverse/hms/blob/master/R/zzz.R
# Thu Apr 19 10:53:24 CEST 2018
register_s3_method <- function(pkg, generic, class, fun = NULL) {
stopifnot(is.character(pkg), length(pkg) == 1)
stopifnot(is.character(generic), length(generic) == 1)
stopifnot(is.character(class), length(class) == 1)
if (is.null(fun)) {
fun <- get(paste0(generic, ".", class), envir = parent.frame())
} else {
stopifnot(is.function(fun))
}
if (pkg %in% loadedNamespaces()) {
registerS3method(generic, class, fun, envir = asNamespace(pkg))
}
# Always register hook in case package is later unloaded & reloaded
setHook(
packageEvent(pkg, "onLoad"),
function(...) {
registerS3method(generic, class, fun, envir = asNamespace(pkg))
}
)
}
.onLoad = function(libname, pkgname) {
register_s3_method('dplyr', 'filter', 'sensebox')
register_s3_method('dplyr', 'mutate', 'sensebox')
register_s3_method('dplyr', 'filter', 'osem_measurements')
register_s3_method('dplyr', 'mutate', 'osem_measurements')
register_s3_method('sf', 'st_as_sf', 'sensebox')
register_s3_method('sf', 'st_as_sf', 'osem_measurements')
}


@@ -1,6 +1,6 @@
 # ==============================================================================
 #
-#' Get the Measurements of a Phenomenon on opensensemap.org
+#' Fetch the Measurements of a Phenomenon on opensensemap.org
 #'
 #' Measurements can be retrieved either for a set of boxes, or through a spatial
 #' bounding box filter. To get all measurements, the \code{default} function applies
@@ -18,15 +18,20 @@
 #' @param exposure Filter sensors by their exposure ('indoor', 'outdoor', 'mobile')
 #' @param from A \code{POSIXt} like object to select a time interval
 #' @param to A \code{POSIXt} like object to select a time interval
-#' @param columns Select specific column in the output (see oSeM documentation)
+#' @param columns Select specific columns in the output (see openSenseMap API documentation)
 #' @param endpoint The URL of the openSenseMap API
+#' @param progress Whether to print download progress information
+#' @param cache Whether to cache the result, defaults to false.
+#'   If a valid path to a directory is given, the response will be cached there. Subsequent identical requests will return the cached data instead.
 #'
 #' @return An \code{osem_measurements data.frame} containing the
 #'   requested measurements
 #'
 #' @export
 #' @seealso \href{https://docs.opensensemap.org/#api-Measurements-getDataMulti}{openSenseMap API documentation (web)}
+#' @seealso \code{\link{osem_box}}
 #' @seealso \code{\link{osem_boxes}}
+#' @seealso \code{\link{osem_clear_cache}}
 osem_measurements = function (x, ...) UseMethod('osem_measurements')
 
 # ==============================================================================
@@ -34,11 +39,29 @@ osem_measurements = function (x, ...) UseMethod('osem_measurements')
 #' @describeIn osem_measurements Get measurements from \strong{all} senseBoxes.
 #' @export
 #' @examples
-#' # get measurements from all boxes
 #' \dontrun{
-#'   osem_measurements('PM2.5')
-#' }
+#'   # get measurements from all boxes on the phenomenon 'PM10' from the last 48h
+#'   m = osem_measurements('PM10')
 #'
+#'   # get measurements from all mobile boxes on the phenomenon 'PM10' from the last 48h
+#'   m = osem_measurements('PM10', exposure = 'mobile')
+#'
+#'   # get measurements and cache them locally in the working directory.
+#'   # subsequent identical requests will load from the cache instead, ensuring
+#'   # reproducibility and saving time and bandwidth!
+#'   m = osem_measurements('PM10', exposure = 'mobile', cache = getwd())
+#'   m = osem_measurements('PM10', exposure = 'mobile', cache = getwd())
+#'
+#'   # get measurements returning a custom selection of columns
+#'   m = osem_measurements('PM10', exposure = 'mobile', columns = c(
+#'     'value',
+#'     'boxId',
+#'     'sensorType',
+#'     'lat',
+#'     'lon',
+#'     'height'
+#'   ))
+#' }
 osem_measurements.default = function (x, ...) {
   bbox = structure(c(-180, -90, 180, 90), class = 'bbox')
   osem_measurements(bbox, x, ...)
 #' @describeIn osem_measurements Get measurements by a spatial filter.
 #' @export
 #' @examples
-#' # get measurements from sensors within a bounding box
-#' bbox = structure(c(7, 51, 8, 52), class = 'bbox')
-#' osem_measurements(bbox, 'Temperatur')
+#' \dontrun{
+#'   # get measurements from sensors within a custom WGS84 bounding box
+#'   bbox = structure(c(7, 51, 8, 52), class = 'bbox')
+#'   m = osem_measurements(bbox, 'Temperatur')
 #'
-#' points = sf::st_multipoint(matrix(c(7, 8, 51, 52), 2, 2))
-#' bbox2 = sf::st_bbox(points)
-#' osem_measurements(bbox2, 'Temperatur', exposure = 'outdoor')
+#'   # construct a bounding box 12km around berlin using the sf package,
+#'   # and get measurements from stations within that box
+#'   library(sf)
+#'   library(units)
+#'   bbox2 = st_point(c(13.4034, 52.5120)) %>%
+#'     st_sfc(crs = 4326) %>%
+#'     st_transform(3857) %>%       # allow setting a buffer in meters
+#'     st_buffer(set_units(12, km)) %>%
+#'     st_transform(4326) %>%       # the opensensemap expects WGS 84
+#'     st_bbox()
+#'   m = osem_measurements(bbox2, 'Temperatur', exposure = 'outdoor')
 #'
+#'   # construct a bounding box from two points,
+#'   # and get measurements from stations within that box
+#'   points = st_multipoint(matrix(c(7.5, 7.8, 51.7, 52), 2, 2))
+#'   bbox3 = st_bbox(points)
+#'   m = osem_measurements(bbox3, 'Temperatur', exposure = 'outdoor')
+#' }
 osem_measurements.bbox = function (x, phenomenon, exposure = NA,
                                    from = NA, to = NA, columns = NA,
                                    ...,
-                                   endpoint = 'https://api.opensensemap.org') {
+                                   endpoint = osem_endpoint(),
+                                   progress = TRUE,
+                                   cache = NA) {
   bbox = x
   environment() %>%
@@ -73,18 +113,32 @@ osem_measurements.bbox = function (x, phenomenon, exposure = NA,
 #' @describeIn osem_measurements Get measurements from a set of senseBoxes.
 #' @export
 #' @examples
-#' # get measurements from a set of boxes
-#' b = osem_boxes(grouptag = 'ifgi')
-#' osem_measurements(b, phenomenon = 'Temperatur')
+#' \donttest{
+#'   # get measurements from a set of boxes
+#'   b = osem_boxes(grouptag = 'ifgi')
+#'   m4 = osem_measurements(b, phenomenon = 'Temperatur')
 #'
-#' # ...or a single box
-#' b = osem_box('593bcd656ccf3b0011791f5a')
-#' osem_measurements(b, phenomenon = 'Temperatur')
+#'   # ...or a single box
+#'   b = osem_box('57000b8745fd40c8196ad04c')
+#'   m5 = osem_measurements(b, phenomenon = 'Temperatur')
 #'
+#'   # get measurements from a single box from the last 40 days.
+#'   # requests are paged for long time frames, so the API's limitation
+#'   # does not apply!
+#'   library(lubridate)
+#'   m1 = osem_measurements(
+#'     b,
+#'     'Temperatur',
+#'     to = now(),
+#'     from = now() - days(40)
+#'   )
+#' }
 osem_measurements.sensebox = function (x, phenomenon, exposure = NA,
                                        from = NA, to = NA, columns = NA,
                                        ...,
-                                       endpoint = 'https://api.opensensemap.org') {
+                                       endpoint = osem_endpoint(),
+                                       progress = TRUE,
+                                       cache = NA) {
   boxes = x
   environment() %>%
     as.list() %>%
@@ -102,30 +156,37 @@ osem_measurements.sensebox = function (x, phenomenon, exposure = NA,
 #' @return A named \code{list} of parsed parameters.
 #' @noRd
 parse_get_measurements_params = function (params) {
-  if (is.null(params$phenomenon) | is.na(params$phenomenon))
+  if (is.symbol(params$phenomenon) || is.null(params$phenomenon) || is.na(params$phenomenon))
     stop('Parameter "phenomenon" is required')
-  if (!is.na(params$from) && is.na(params$to))
-    stop('specify "from" only together with "to"')
+  if (
+    (!is.na(params$from) && is.na(params$to)) ||
+    (!is.na(params$to) && is.na(params$from))
+  ) stop('specify "from" only together with "to"')
   if (
     (!is.null(params$bbox) && !is.null(params$boxes)) ||
     (is.null(params$bbox) && is.null(params$boxes))
   ) stop('Specify either "bbox" or "boxes"')
 
-  query = list(endpoint = params$endpoint, phenomenon = params$phenomenon)
+  query = list(
+    endpoint = params$endpoint,
+    phenomenon = params$phenomenon,
+    progress = params$progress,
+    cache = params$cache
+  )
 
-  if (!is.null(params$boxes)) query$boxIds = paste(params$boxes$X_id, collapse = ',')
+  if (!is.null(params$boxes)) query$boxId = paste(params$boxes$X_id, collapse = ',')
   if (!is.null(params$bbox)) query$bbox = paste(params$bbox, collapse = ',')
 
   if (!is.na(params$from) && !is.na(params$to)) {
     parse_dateparams(params$from, params$to) # only for validation side effect
-    query$`from-date` = utc_date(params$from)
-    query$`to-date` = utc_date(params$to)
+    query$`from-date` = date_as_utc(params$from)
+    query$`to-date` = date_as_utc(params$to)
   }
 
   if (!is.na(params$exposure)) query$exposure = params$exposure
-  if (!is.na(params$columns))
+  if (!any(is.na(params$columns)))
     query$columns = paste(params$columns, collapse = ',')
   else
     query$columns = 'value,createdAt,lon,lat,sensorId,unit'
@@ -140,8 +201,8 @@ paged_measurements_req = function (query) {
   # auto paging: make a request for one 31-day interval each (max supported length)
   # generate a list of 31-day intervals
-  from = query$from
-  to = query$to
+  from = query$`from-date`
+  to = query$`to-date`
   dates = list()
   while (from < to) {
     in31days = from + as.difftime(31, units = 'days')
@@ -150,13 +211,19 @@ paged_measurements_req = function (query) {
   }
 
   # use the dates as pages for multiple requests
-  lapply(dates, function(page) {
+  df = lapply(dates, function(page) {
     query$`from-date` = date_as_isostring(page$from)
     query$`to-date` = date_as_isostring(page$to)
     res = do.call(get_measurements_, query)
-    cat(paste(query$`from-date`, query$`to-date`, sep = ' - '))
-    cat('\n')
+    if (query$progress && !is_non_interactive())
+      cat(paste(query$`from-date`, query$`to-date`, sep = ' - '), '\n')
     res
   }) %>%
     dplyr::bind_rows()
+
+  # coerce all character columns (sensorId, unit, ...) to factors AFTER binding
+  df[sapply(df, is.character)] = lapply(df[sapply(df, is.character)], as.factor)
+  df
 }


@ -1,8 +1,25 @@
#' @export #' @export
plot.osem_measurements = function (x, ...) { plot.osem_measurements = function (x, ..., mar = c(2, 4, 1, 1)) {
oldpar = par() oldpar <- par(no.readonly = TRUE)
par(mar = c(2,4,1,1)) on.exit(par(oldpar))
par(mar = mar)
plot(value~createdAt, x, col = factor(x$sensorId), xlab = NA, ylab = x$unit[1], ...) plot(value~createdAt, x, col = factor(x$sensorId), xlab = NA, ylab = x$unit[1], ...)
par(mar = oldpar$mar)
invisible(x) invisible(x)
} }
#' @export
print.osem_measurements = function (x, ...) {
print.data.frame(x, ...)
invisible(x)
}
#' Converts a foreign object to an osem_measurements data.frame.
#' @param x A data.frame to attach the class to.
#' Should have at least a `value` and `createdAt` column.
#' @return data.frame of class \code{osem_measurements}
#' @export
osem_as_measurements = function(x) {
ret = tibble::as_tibble(x)
class(ret) = c('osem_measurements', class(ret))
ret
}


@@ -1,29 +1,112 @@
-#' opensensmapr: Work with sensor data from opensensemap.org
+#' opensensmapr: Get sensor data from opensensemap.org
 #'
-#' The opensensmapr package provides three categories functions:
-#' \enumerate{
-#'   \item retrieval of senseBoxes
-#'   \item retrieval of measurements
-#'   \item general stats about the openSenseMap database
+#' The opensensmapr package provides functions for
+#' \itemize{
+#'   \item retrieval of senseBox metadata,
+#'   \item retrieval of senseBox measurements,
+#'   \item general statistics about the openSenseMap database.
 #' }
+#' Additionally, helper functions are provided to ease the integration with the
+#' \code{\link[sf]{sf}} package for spatial analysis as well as
+#' \code{\link[dplyr]{dplyr}} for general data handling.
 #'
 #' @section Retrieving senseBox metadata:
-#' TODO
+#' On the openSenseMap, measurements are provided by sensors which are assigned
+#' to a sensor station ("senseBox").
+#' A senseBox consists of a collection of sensors, a location (-history), an ID,
+#' as well as metadata about its owner & placement.
+#' senseBoxes can be retrieved either by ID, or as a collection with optional
+#' filters on their metadata:
+#' \itemize{
+#'   \item \code{\link{osem_box}}: Get metadata about a single box by its ID.
+#'   \item \code{\link{osem_boxes}}: Get metadata about all boxes, optionally
+#'     filtered by their attributes.
+#' }
+#'
+#' The data is returned as a \code{\link{data.frame}} with the class
+#' \code{sensebox} attached.
+#' To help in getting an overview of the dataset, additional functions are
+#' implemented:
+#' \itemize{
+#'   \item \code{summary.sensebox()}: Aggregate the metadata about the given
+#'     list of senseBoxes.
+#'   \item \code{plot.sensebox()}: Shows the spatial distribution of the given
+#'     list of senseBoxes on a map. Requires additional packages!
+#'   \item \code{\link{osem_phenomena}}: Get a named list with
+#'     counts of the measured phenomena of the given list of senseBoxes.
+#' }
 #'
 #' @section Retrieving measurements:
-#' TODO
+#' There are two ways to retrieve measurements:
+#' \itemize{
+#'   \item \code{\link{osem_measurements_archive}}:
+#'     Downloads measurements for a \emph{single box} from the openSenseMap archive.
+#'     This function does not provide realtime data, but is suitable for long time frames.
+#'
+#'   \item \code{\link{osem_measurements}}:
+#'     This function retrieves (realtime) measurements from the API. It works for a
+#'     \emph{single phenomenon} only, but provides various filters to select sensors by
+#'
+#'     \itemize{
+#'       \item a list of senseBoxes, previously retrieved through
+#'         \code{\link{osem_box}} or \code{\link{osem_boxes}}.
+#'       \item a geographic bounding box, which can be generated with the
+#'         \code{\link[sf]{sf}} package.
+#'       \item a time frame
+#'       \item an exposure type of the given box
+#'     }
+#'
+#'     Use this function with caution for long time frames, as the API becomes
+#'     quite slow and is limited to 10.000 measurements per 30 day interval.
+#' }
+#'
+#' Data is returned as a \code{tibble} with the class \code{osem_measurements}.
 #'
 #' @section Retrieving statistics:
-#' TODO
+#' Count statistics about the database are provided with \code{\link{osem_counts}}.
 #'
-#' @section Working with spatial data from openSenseMap:
-#' TODO
+#' @section Using a different API instance / endpoint:
+#' You can override the functions \code{osem_endpoint} and \code{osem_endpoint_archive}
+#' inside the package namespace:
+#'
+#' \code{
+#' assignInNamespace("osem_endpoint", function() "http://mynewosem.org", "opensensmapr")
+#' }
+#'
+#' @section Integration with other packages:
+#' The package aims to be compatible with the tidyverse.
+#' Helpers are implemented to ease the further usage of the retrieved data:
+#'
+#' \itemize{
+#'   \item \code{\link{osem_as_sensebox}} & \code{\link{osem_as_measurements}}:
+#'     Transform a foreign object to a sensebox data.frame or osem_measurements
+#'     by attaching the required classes and attributes.
+#'   \item \code{\link{st_as_sf.sensebox}} & \code{\link{st_as_sf.osem_measurements}}:
+#'     Transform the senseBoxes or measurements into an \code{\link[sf]{sf}}
+#'     compatible format for spatial analysis.
+#'   \item \code{filter.sensebox()} & \code{mutate.sensebox()}: for use with
+#'     \code{\link{dplyr}}.
+#' }
+#'
+#' @seealso Report bugs at \url{https://github.com/sensebox/opensensmapR/issues}
+#' @seealso openSenseMap API: \url{https://api.opensensemap.org/}
+#' @seealso official openSenseMap API documentation: \url{https://docs.opensensemap.org/}
 #' @docType package
 #' @name opensensmapr
 '_PACKAGE'
 
-#' @importFrom graphics plot
+#' @importFrom graphics plot legend par
 #' @importFrom magrittr %>%
 `%>%` = magrittr::`%>%`
+
+# just to make R CMD check happy, due to NSE (dplyr) functions
+globalVariables(c(
+  'createdAt',
+  'lastMeasurement',
+  'sensorType',
+  'title',
+  'unit',
+  'value',
+  'X_id',
+  '.'
+))


@@ -18,19 +18,20 @@ osem_phenomena = function (boxes) UseMethod('osem_phenomena')
 #' # get the phenomena for a single senseBox
 #' osem_phenomena(osem_box('593bcd656ccf3b0011791f5a'))
 #'
-#' # get the phenomena for a group of senseBoxes
-#' osem_phenomena(
-#'   osem_boxes(grouptag = 'ifgi', exposure = 'outdoor', date = Sys.time())
-#' )
+#' \donttest{
+#' # get the phenomena for a group of senseBoxes
+#' osem_phenomena(
+#'   osem_boxes(grouptag = 'ifgi', exposure = 'outdoor', date = Sys.time())
+#' )
 #'
-#' # get phenomena with at least 10 sensors on opensensemap
-#' phenoms = osem_phenomena(osem_boxes())
-#' names(phenoms[phenoms > 9])
+#' # get phenomena with at least 30 sensors on opensensemap
+#' phenoms = osem_phenomena(osem_boxes())
+#' names(phenoms[phenoms > 29])
+#' }
 osem_phenomena.sensebox = function (boxes) {
   p = Reduce(`c`, boxes$phenomena) %>% # get all the row contents in a single vector
     table() %>%                        # get count for each phenomenon
     as.list()
 
-  p[order(unlist(p), decreasing = T)]
+  p[order(unlist(p), decreasing = TRUE)]
 }


@@ -1,30 +0,0 @@
# ==============================================================================
#
#' Convert a \code{sensebox} or \code{osem_measurements} dataframe to an
#' \code{\link[sf]{st_sf}} object.
#'
#' @param x The object to convert
#' @param ... maybe more objects to convert
#' @return The object with an st_geometry column attached.
#' @export
osem_as_sf = function (x, ...) {
sf::st_as_sf(x, ..., coords = c('lon', 'lat'), crs = 4326)
}
# parses from/to params for get_measurements_ and get_boxes_
parse_dateparams = function (from, to) {
from = utc_date(from)
to = utc_date(to)
if (from - to > 0) stop('"from" must be earlier than "to"')
c(date_as_isostring(from), date_as_isostring(to))
}
# NOTE: cannot handle mixed vectors of POSIXlt and POSIXct
utc_date = function (date) {
time = as.POSIXct(date)
attr(time, 'tzone') = 'UTC'
time
}
# NOTE: cannot handle mixed vectors of POSIXlt and POSIXct
date_as_isostring = function (date) format.Date(date, format = '%FT%TZ')
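The removed date helpers above are small enough to check interactively. A sketch of their combined behavior (using the definitions shown in the removed file; the concrete input date is illustrative only):

```r
# copied from the removed helpers above
utc_date = function (date) {
  time = as.POSIXct(date)
  attr(time, 'tzone') = 'UTC'
  time
}
date_as_isostring = function (date) format.Date(date, format = '%FT%TZ')

# a CET timestamp is shifted to UTC, then rendered as an ISO 8601 string
d = as.POSIXct('2018-01-01 12:00:00', tz = 'CET')
date_as_isostring(utc_date(d))  # formats the UTC-shifted time, e.g. "...T11:00:00Z"
```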

README.md

@ -1,51 +1,120 @@
# opensensmapr

[![CRAN status](https://www.r-pkg.org/badges/version/opensensmapr)](https://cran.r-project.org/package=opensensmapr)
[![Travis build status](https://travis-ci.org/sensebox/opensensmapR.svg?branch=master)](https://travis-ci.org/sensebox/opensensmapR)
[![AppVeyor Build status](https://ci.appveyor.com/api/projects/status/dtad4ves48gebss7/branch/master?svg=true)](https://ci.appveyor.com/project/noerw/opensensmapr/branch/master)

This R package ingests data from the API of [opensensemap.org][osem] for analysis in R.

Features include:

- `osem_boxes()`: fetch sensor station ("box") metadata, with various filters
- `osem_measurements()`: fetch measurements by phenomenon, with various filters such as a spatial extent, time range, sensor type, box, exposure, ...
  - no time frame limitation through request paging!
- many helper functions to help understand the queried data
- caching of queries for reproducibility

The package aims to be compatible with the [`tidyverse`][tidy] and [`sf`][sf],
so it is easy to analyze or visualize the data with state-of-the-art packages.

[osem]: https://opensensemap.org/
[sf]: https://github.com/r-spatial/sf
[tidy]: https://www.tidyverse.org/

## Usage

Complete documentation is provided via the R help system:
each function's documentation can be viewed with `?<function-name>`,
and a comprehensive overview of all functions is given in `?opensensmapr`.

There are also vignettes showcasing applications of this package:

- [Visualising the History of openSenseMap.org][osem-history]: showcase of `opensensmapr` with `dplyr` + `ggplot2`
- [Exploring the openSenseMap dataset][osem-intro]: showcase of the included helper functions
- [Caching openSenseMap Data for reproducibility][osem-serialization]

[osem-intro]: https://sensebox.github.io/opensensmapR/inst/doc/osem-intro.html
[osem-history]: https://sensebox.github.io/opensensmapR/inst/doc/osem-history.html
[osem-serialization]: https://sensebox.github.io/opensensmapR/inst/doc/osem-serialization.html

If you used this package for an analysis and think it could serve as a good
example or showcase, feel free to add a vignette to the package via a [PR](#contributing--development)!
## Installation

The package is available on CRAN; install it via

```r
install.packages('opensensmapr')
```

To install the very latest versions from GitHub, run:

```r
install.packages('devtools')
devtools::install_github('sensebox/opensensmapr@master')      # latest stable version
devtools::install_github('sensebox/opensensmapr@development') # bleeding edge version
```

## Changelog

This project adheres to semantic versioning; for changes in recent versions please consult [NEWS.md](NEWS.md).

## Contributing & Development

Contributions are very welcome!
When submitting a patch, please follow the existing code style,
and run `R CMD check --no-vignettes .` on the package.
Where feasible, also add tests for the added / changed functionality in `tests/testthat`.

Please note that this project is released with a Contributor Code of Conduct.
By participating in this project you agree to abide by its terms.

### development environment

To set up the development environment for testing and checking, all suggested packages should be installed.
On Linux, these require some system dependencies:

```sh
# install dependencies for sf (see https://github.com/r-spatial/sf#installing)
sudo dnf install gdal-devel proj-devel proj-epsg proj-nad geos-devel udunits2-devel

# install suggested packages
R -e "install.packages(c('maps', 'maptools', 'tibble', 'rgeos', 'sf',
  'knitr', 'rmarkdown', 'lubridate', 'units', 'jsonlite', 'ggplot2',
  'zoo', 'lintr', 'testthat', 'covr'))"
```

### build

To build the package, either use `devtools::build()` or run

```sh
R CMD build .
```

Next, run the **tests and checks**:

```sh
R CMD check --as-cran ../opensensmapr_*.tar.gz
# alternatively, if you're in a hurry:
R CMD check --no-vignettes ../opensensmapr_*.tar.gz
```

### release

To create a release:

1. make sure you are on the master branch
2. run the tests and checks as described above
3. bump the version in `DESCRIPTION`
4. update `NEWS.md`
5. rebuild the documentation: `R -e 'devtools::document()'`
6. build the package again with the new version: `R CMD build . --no-build-vignettes`
7. tag the commit with the new version: `git tag v0.5.0`
8. push the changes: `git push && git push --tags`
9. wait for *all* CI tests to complete successfully (helps with the next step)
10. [upload the new release to CRAN](https://cran.r-project.org/submit.html)
11. get back to the enjoyable parts of your life & hope you won't get bad mail next week.

## License

GPL-2.0 - Norwin Roosen

appveyor.yml
# DO NOT CHANGE the "init" and "install" sections below

# Download script file from GitHub
init:
  ps: |
    $ErrorActionPreference = "Stop"
    Invoke-WebRequest http://raw.github.com/krlmlr/r-appveyor/master/scripts/appveyor-tool.ps1 -OutFile "..\appveyor-tool.ps1"
    Import-Module '..\appveyor-tool.ps1'

install:
  ps: Bootstrap

cache:
  - C:\RLibrary

# Adapt as necessary starting from here

build_script:
  - travis-tool.sh install_deps

test_script:
  - travis-tool.sh run_tests

on_failure:
  - 7z a failure.zip *.Rcheck\*
  - appveyor PushArtifact failure.zip

artifacts:
  - path: '*.Rcheck\**\*.log'
    name: Logs

  - path: '*.Rcheck\**\*.out'
    name: Logs

  - path: '*.Rcheck\**\*.fail'
    name: Logs

  - path: '*.Rcheck\**\*.Rout'
    name: Logs

  - path: '\*_*.tar.gz'
    name: Bits

  - path: '\*_*.zip'
    name: Bits

codecov.yml
comment: false

coverage:
  status:
    project:
      default:
        target: auto
        threshold: 1%
    patch:
      default:
        target: auto
        threshold: 1%

inst/doc/osem-history.R
## ----setup, results='hide', message=FALSE, warning=FALSE----------------------
# required packages:
library(opensensmapr) # data download
library(dplyr) # data wrangling
library(ggplot2) # plotting
library(lubridate) # date arithmetic
library(zoo) # rollmean()
## ----download-----------------------------------------------------------------
# if you want to see results for a specific subset of boxes,
# just specify a filter such as grouptag='ifgi' here
# boxes = osem_boxes(cache = '.')
boxes = readRDS('boxes_precomputed.rds') # read precomputed file to save resources
## ----exposure_counts, message=FALSE-------------------------------------------
exposure_counts = boxes %>%
group_by(exposure) %>%
mutate(count = row_number(createdAt))
exposure_colors = c(indoor = 'red', outdoor = 'lightgreen', mobile = 'blue', unknown = 'darkgrey')
ggplot(exposure_counts, aes(x = createdAt, y = count, colour = exposure)) +
geom_line() +
scale_colour_manual(values = exposure_colors) +
xlab('Registration Date') + ylab('senseBox count')
## ----exposure_summary---------------------------------------------------------
exposure_counts %>%
summarise(
oldest = min(createdAt),
newest = max(createdAt),
count = max(count)
) %>%
arrange(desc(count))
## ----grouptag_counts, message=FALSE-------------------------------------------
grouptag_counts = boxes %>%
group_by(grouptag) %>%
# only include grouptags with 8 or more members
filter(length(grouptag) >= 8 & !is.na(grouptag)) %>%
mutate(count = row_number(createdAt))
# helper for sorting the grouptags by boxcount
sortLvls = function(oldFactor, ascending = TRUE) {
lvls = table(oldFactor) %>% sort(., decreasing = !ascending) %>% names()
factor(oldFactor, levels = lvls)
}
grouptag_counts$grouptag = sortLvls(grouptag_counts$grouptag, ascending = FALSE)
ggplot(grouptag_counts, aes(x = createdAt, y = count, colour = grouptag)) +
geom_line(aes(group = grouptag)) +
xlab('Registration Date') + ylab('senseBox count')
## ----grouptag_summary---------------------------------------------------------
grouptag_counts %>%
summarise(
oldest = min(createdAt),
newest = max(createdAt),
count = max(count)
) %>%
arrange(desc(count))
## ----growthrate_registered, warning=FALSE, message=FALSE, results='hide'------
bins = 'week'
mvavg_bins = 6
growth = boxes %>%
mutate(week = cut(as.Date(createdAt), breaks = bins)) %>%
group_by(week) %>%
summarize(count = length(week)) %>%
mutate(event = 'registered')
## ----growthrate_inactive, warning=FALSE, message=FALSE, results='hide'--------
inactive = boxes %>%
# remove boxes that were updated in the last two days,
# b/c any box becomes inactive at some point by definition of updatedAt
filter(updatedAt < now() - days(2)) %>%
mutate(week = cut(as.Date(updatedAt), breaks = bins)) %>%
group_by(week) %>%
summarize(count = length(week)) %>%
mutate(event = 'inactive')
## ----growthrate, warning=FALSE, message=FALSE, results='hide'-----------------
boxes_by_date = bind_rows(growth, inactive) %>% group_by(event)
ggplot(boxes_by_date, aes(x = as.Date(week), colour = event)) +
xlab('Time') + ylab(paste('rate per ', bins)) +
scale_x_date(date_breaks="years", date_labels="%Y") +
scale_colour_manual(values = c(registered = 'lightgreen', inactive = 'grey')) +
geom_point(aes(y = count), size = 0.5) +
# moving average, make first and last value NA (to ensure identical length of vectors)
geom_line(aes(y = rollmean(count, mvavg_bins, fill = list(NA, NULL, NA))))
## ----exposure_duration, message=FALSE-----------------------------------------
duration = boxes %>%
group_by(exposure) %>%
filter(!is.na(updatedAt)) %>%
mutate(duration = difftime(updatedAt, createdAt, units='days'))
ggplot(duration, aes(x = exposure, y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days')
## ----grouptag_duration, message=FALSE-----------------------------------------
duration = boxes %>%
group_by(grouptag) %>%
# only include grouptags with 8 or more members
filter(length(grouptag) >= 8 & !is.na(grouptag) & !is.na(updatedAt)) %>%
mutate(duration = difftime(updatedAt, createdAt, units='days'))
ggplot(duration, aes(x = grouptag, y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days')
duration %>%
summarize(
duration_avg = round(mean(duration)),
duration_min = round(min(duration)),
duration_max = round(max(duration)),
oldest_box = round(max(difftime(now(), createdAt, units='days')))
) %>%
arrange(desc(duration_avg))
## ----year_duration, message=FALSE---------------------------------------------
# NOTE: boxes older than 2016 missing due to missing updatedAt in database
duration = boxes %>%
mutate(year = cut(as.Date(createdAt), breaks = 'year')) %>%
group_by(year) %>%
filter(!is.na(updatedAt)) %>%
mutate(duration = difftime(updatedAt, createdAt, units='days'))
ggplot(duration, aes(x = substr(as.character(year), 0, 4), y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days') + xlab('Year of Registration')

inst/doc/osem-history.Rmd
---
title: "Visualising the History of openSenseMap.org"
author: "Norwin Roosen"
date: '`r Sys.Date()`'
output:
rmarkdown::html_vignette:
df_print: kable
fig_height: 5
fig_width: 7
toc: yes
html_document:
code_folding: hide
df_print: kable
theme: lumen
toc: yes
toc_float: yes
vignette: >
%\VignetteIndexEntry{Visualising the History of openSenseMap.org}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
> This vignette serves as an example of data wrangling & visualization with
`opensensmapr`, `dplyr` and `ggplot2`.
```{r setup, results='hide', message=FALSE, warning=FALSE}
# required packages:
library(opensensmapr) # data download
library(dplyr) # data wrangling
library(ggplot2) # plotting
library(lubridate) # date arithmetic
library(zoo) # rollmean()
```
openSenseMap.org has grown quite a bit in the last years; it would be interesting
to see how we got to the current `r osem_counts()$boxes` sensor stations,
split up by various attributes of the boxes.
While `opensensmapr` provides extensive methods of filtering boxes by attributes
on the server, we do the filtering within R to save time and gain flexibility.
So the first step is to retrieve *all the boxes*:
```{r download}
# if you want to see results for a specific subset of boxes,
# just specify a filter such as grouptag='ifgi' here
# boxes = osem_boxes(cache = '.')
boxes = readRDS('boxes_precomputed.rds') # read precomputed file to save resources
```
# Plot count of boxes by time {.tabset}
By looking at the `createdAt` attribute of each box we know the exact time a box
was registered.
With this approach we have no information about boxes that were deleted in the
meantime, but that's okay for now.
## ...and exposure
```{r exposure_counts, message=FALSE}
exposure_counts = boxes %>%
group_by(exposure) %>%
mutate(count = row_number(createdAt))
exposure_colors = c(indoor = 'red', outdoor = 'lightgreen', mobile = 'blue', unknown = 'darkgrey')
ggplot(exposure_counts, aes(x = createdAt, y = count, colour = exposure)) +
geom_line() +
scale_colour_manual(values = exposure_colors) +
xlab('Registration Date') + ylab('senseBox count')
```
Outdoor boxes are growing *fast*!
We can also see the introduction of `mobile` sensor "stations" in 2017. While
mobile boxes are still few, we can expect a quick rise in 2018 once the new
senseBox MCU with GPS support is released.
Let's have a quick summary:
```{r exposure_summary}
exposure_counts %>%
summarise(
oldest = min(createdAt),
newest = max(createdAt),
count = max(count)
) %>%
arrange(desc(count))
```
## ...and grouptag
We can try to find out where the increases in growth came from, by analysing the
box count by grouptag.
Caveats: Only a small subset of boxes has a grouptag, and we should assume
that these groups are actually bigger. Also, we can see that grouptag naming is
inconsistent (`Luftdaten`, `luftdaten.info`, ...).
```{r grouptag_counts, message=FALSE}
grouptag_counts = boxes %>%
group_by(grouptag) %>%
# only include grouptags with 8 or more members
filter(length(grouptag) >= 8 & !is.na(grouptag)) %>%
mutate(count = row_number(createdAt))
# helper for sorting the grouptags by boxcount
sortLvls = function(oldFactor, ascending = TRUE) {
lvls = table(oldFactor) %>% sort(., decreasing = !ascending) %>% names()
factor(oldFactor, levels = lvls)
}
grouptag_counts$grouptag = sortLvls(grouptag_counts$grouptag, ascending = FALSE)
ggplot(grouptag_counts, aes(x = createdAt, y = count, colour = grouptag)) +
geom_line(aes(group = grouptag)) +
xlab('Registration Date') + ylab('senseBox count')
```
```{r grouptag_summary}
grouptag_counts %>%
summarise(
oldest = min(createdAt),
newest = max(createdAt),
count = max(count)
) %>%
arrange(desc(count))
```
# Plot rate of growth and inactivity per week
First we group the boxes by `createdAt` into bins of one week:
```{r growthrate_registered, warning=FALSE, message=FALSE, results='hide'}
bins = 'week'
mvavg_bins = 6
growth = boxes %>%
mutate(week = cut(as.Date(createdAt), breaks = bins)) %>%
group_by(week) %>%
summarize(count = length(week)) %>%
mutate(event = 'registered')
```
We can do the same for `updatedAt`, which informs us about the last change to
a box, including uploaded measurements.
This method of determining inactive boxes is fairly inaccurate and should be
considered an approximation, because we have no information about intermediate
inactive phases.
Also deleted boxes would probably have a big impact here.
```{r growthrate_inactive, warning=FALSE, message=FALSE, results='hide'}
inactive = boxes %>%
# remove boxes that were updated in the last two days,
# b/c any box becomes inactive at some point by definition of updatedAt
filter(updatedAt < now() - days(2)) %>%
mutate(week = cut(as.Date(updatedAt), breaks = bins)) %>%
group_by(week) %>%
summarize(count = length(week)) %>%
mutate(event = 'inactive')
```
Now we can combine both datasets for plotting:
```{r growthrate, warning=FALSE, message=FALSE, results='hide'}
boxes_by_date = bind_rows(growth, inactive) %>% group_by(event)
ggplot(boxes_by_date, aes(x = as.Date(week), colour = event)) +
xlab('Time') + ylab(paste('rate per ', bins)) +
scale_x_date(date_breaks="years", date_labels="%Y") +
scale_colour_manual(values = c(registered = 'lightgreen', inactive = 'grey')) +
geom_point(aes(y = count), size = 0.5) +
# moving average, make first and last value NA (to ensure identical length of vectors)
geom_line(aes(y = rollmean(count, mvavg_bins, fill = list(NA, NULL, NA))))
```
We see a sudden rise in early 2017, which lines up with the fast growing grouptag `Luftdaten`.
This was enabled by an integration of openSenseMap.org into the firmware of the
air quality monitoring project [luftdaten.info](https://sensor.community/de/).
The dips in mid 2017 and early 2018 could possibly be explained by production/delivery issues
of the senseBox hardware, but I have no data on the exact time frames to verify.
# Plot duration of boxes being active {.tabset}
While we are looking at `createdAt` and `updatedAt`, we can also extract the duration of activity
of each box, and look at metrics by exposure and grouptag once more:
## ...by exposure
```{r exposure_duration, message=FALSE}
duration = boxes %>%
group_by(exposure) %>%
filter(!is.na(updatedAt)) %>%
mutate(duration = difftime(updatedAt, createdAt, units='days'))
ggplot(duration, aes(x = exposure, y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days')
```
The time of activity averages at only `r round(mean(duration$duration))` days,
though there are boxes with `r round(max(duration$duration))` days of activity,
spanning a large chunk of openSenseMap's existence.
## ...by grouptag
```{r grouptag_duration, message=FALSE}
duration = boxes %>%
group_by(grouptag) %>%
# only include grouptags with 8 or more members
filter(length(grouptag) >= 8 & !is.na(grouptag) & !is.na(updatedAt)) %>%
mutate(duration = difftime(updatedAt, createdAt, units='days'))
ggplot(duration, aes(x = grouptag, y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days')
duration %>%
summarize(
duration_avg = round(mean(duration)),
duration_min = round(min(duration)),
duration_max = round(max(duration)),
oldest_box = round(max(difftime(now(), createdAt, units='days')))
) %>%
arrange(desc(duration_avg))
```
The time of activity averages at only `r round(mean(duration$duration))` days,
though there are boxes with `r round(max(duration$duration))` days of activity,
spanning a large chunk of openSenseMap's existence.
## ...by year of registration
This is less useful, as older boxes are active for a longer time by definition.
If you have an idea how to compensate for that, please send a [Pull Request][PR]!
```{r year_duration, message=FALSE}
# NOTE: boxes older than 2016 missing due to missing updatedAt in database
duration = boxes %>%
mutate(year = cut(as.Date(createdAt), breaks = 'year')) %>%
group_by(year) %>%
filter(!is.na(updatedAt)) %>%
mutate(duration = difftime(updatedAt, createdAt, units='days'))
ggplot(duration, aes(x = substr(as.character(year), 0, 4), y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days') + xlab('Year of Registration')
```
# More Visualisations
Other visualisations come to mind, and are left as an exercise to the reader.
If you implemented some, feel free to add them to this vignette via a [Pull Request][PR].
* growth by phenomenon
* growth by location -> (interactive) map
* set inactive rate in relation to total box count
* filter timespans with big dips in growth rate, and extrapolate the amount of
senseBoxes that could be on the platform today, assuming there were no production issues ;)
[PR]: https://github.com/sensebox/opensensmapr/pulls
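As a starting point, the first of these ideas could be sketched roughly like this (hypothetical code: it assumes that `boxes` from the chunks above carries a list-column `phenomena` as returned by `osem_boxes()`, and that `tidyr` is installed; the threshold of 100 is arbitrary):

```r
library(tidyr) # for unnest()

phenomenon_counts = boxes %>%
  unnest(phenomena) %>%                  # one row per box & phenomenon
  group_by(phenomena) %>%
  filter(n() >= 100) %>%                 # keep common phenomena only
  mutate(count = row_number(createdAt))  # cumulative count per phenomenon

ggplot(phenomenon_counts, aes(x = createdAt, y = count, colour = phenomena)) +
  geom_line() +
  xlab('Registration Date') + ylab('senseBox count')
```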

inst/doc/osem-history.html

File diff suppressed because one or more lines are too long
## ----setup, results='hide', message=FALSE, warning=FALSE----------------------
# required packages:
library(opensensmapr) # data download
library(dplyr) # data wrangling
library(ggplot2) # plotting
library(lubridate) # date arithmetic
library(zoo) # rollmean()
## ----download, results='hide', message=FALSE, warning=FALSE-------------------
# if you want to see results for a specific subset of boxes,
# just specify a filter such as grouptag='ifgi' here
# boxes = osem_boxes(cache = '.')
boxes = readRDS('boxes_precomputed.rds') # read precomputed file to save resources
## -----------------------------------------------------------------------------
boxes = filter(boxes, locationtimestamp >= "2022-01-01" & locationtimestamp <="2022-12-31")
summary(boxes) -> summary.data.frame
## ---- message=FALSE, warning=FALSE--------------------------------------------
plot(boxes)
## -----------------------------------------------------------------------------
phenoms = osem_phenomena(boxes)
str(phenoms)
## -----------------------------------------------------------------------------
phenoms[phenoms > 50]
## ----exposure_counts, message=FALSE-------------------------------------------
exposure_counts = boxes %>%
group_by(exposure) %>%
mutate(count = row_number(locationtimestamp))
exposure_colors = c(indoor = 'red', outdoor = 'lightgreen', mobile = 'blue', unknown = 'darkgrey')
ggplot(exposure_counts, aes(x = locationtimestamp, y = count, colour = exposure)) +
geom_line() +
scale_colour_manual(values = exposure_colors) +
xlab('Registration Date') + ylab('senseBox count')
## ----exposure_summary---------------------------------------------------------
exposure_counts %>%
summarise(
oldest = min(locationtimestamp),
newest = max(locationtimestamp),
count = max(count)
) %>%
arrange(desc(count))
## ----grouptag_counts, message=FALSE-------------------------------------------
grouptag_counts = boxes %>%
group_by(grouptag) %>%
# only include grouptags with 15 or more members
filter(length(grouptag) >= 15 & !is.na(grouptag) & grouptag != '') %>%
mutate(count = row_number(locationtimestamp))
# helper for sorting the grouptags by boxcount
sortLvls = function(oldFactor, ascending = TRUE) {
lvls = table(oldFactor) %>% sort(., decreasing = !ascending) %>% names()
factor(oldFactor, levels = lvls)
}
grouptag_counts$grouptag = sortLvls(grouptag_counts$grouptag, ascending = FALSE)
ggplot(grouptag_counts, aes(x = locationtimestamp, y = count, colour = grouptag)) +
geom_line(aes(group = grouptag)) +
xlab('Registration Date') + ylab('senseBox count')
## ----grouptag_summary---------------------------------------------------------
grouptag_counts %>%
summarise(
oldest = min(locationtimestamp),
newest = max(locationtimestamp),
count = max(count)
) %>%
arrange(desc(count))
## ----growthrate_registered, warning=FALSE, message=FALSE, results='hide'------
bins = 'week'
mvavg_bins = 6
growth = boxes %>%
mutate(week = cut(as.Date(locationtimestamp), breaks = bins)) %>%
group_by(week) %>%
summarize(count = length(week)) %>%
mutate(event = 'registered')
## ----growthrate_inactive, warning=FALSE, message=FALSE, results='hide'--------
inactive = boxes %>%
# remove boxes that were updated in the last two days,
# b/c any box becomes inactive at some point by definition of updatedAt
filter(lastMeasurement < now() - days(2)) %>%
mutate(week = cut(as.Date(lastMeasurement), breaks = bins)) %>%
filter(as.Date(week) > as.Date("2021-12-31")) %>%
group_by(week) %>%
summarize(count = length(week)) %>%
mutate(event = 'inactive')
## ----growthrate, warning=FALSE, message=FALSE, results='hide'-----------------
boxes_by_date = bind_rows(growth, inactive) %>% group_by(event)
ggplot(boxes_by_date, aes(x = as.Date(week), colour = event)) +
xlab('Time') + ylab(paste('rate per ', bins)) +
scale_x_date(date_breaks="years", date_labels="%Y") +
scale_colour_manual(values = c(registered = 'lightgreen', inactive = 'grey')) +
geom_point(aes(y = count), size = 0.5) +
# moving average, make first and last value NA (to ensure identical length of vectors)
geom_line(aes(y = rollmean(count, mvavg_bins, fill = list(NA, NULL, NA))))
## ----table_mostregistrations--------------------------------------------------
boxes_by_date %>%
filter(count > 50) %>%
arrange(desc(count))
## ----exposure_duration, message=FALSE-----------------------------------------
durations = boxes %>%
group_by(exposure) %>%
filter(!is.na(lastMeasurement)) %>%
mutate(duration = difftime(lastMeasurement, locationtimestamp, units='days')) %>%
filter(duration >= 0)
ggplot(durations, aes(x = exposure, y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days')
## ----grouptag_duration, message=FALSE-----------------------------------------
durations = boxes %>%
filter(!is.na(lastMeasurement)) %>%
group_by(grouptag) %>%
# only include grouptags with 15 or more members
filter(length(grouptag) >= 15 & !is.na(grouptag) & !is.na(lastMeasurement)) %>%
mutate(duration = difftime(lastMeasurement, locationtimestamp, units='days')) %>%
filter(duration >= 0)
ggplot(durations, aes(x = grouptag, y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days')
durations %>%
summarize(
duration_avg = round(mean(duration)),
duration_min = round(min(duration)),
duration_max = round(max(duration)),
oldest_box = round(max(difftime(now(), locationtimestamp, units='days')))
) %>%
arrange(desc(duration_avg))
## ----year_duration, message=FALSE---------------------------------------------
# NOTE: boxes older than 2016 missing due to missing updatedAt in database
duration = boxes %>%
mutate(year = cut(as.Date(locationtimestamp), breaks = 'year')) %>%
group_by(year) %>%
filter(!is.na(lastMeasurement)) %>%
mutate(duration = difftime(lastMeasurement, locationtimestamp, units='days')) %>%
filter(duration >= 0)
ggplot(duration, aes(x = substr(as.character(year), 0, 4), y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days') + xlab('Year of Registration')

---
title: "Visualising the Development of openSenseMap.org in 2022"
author: "Jan Stenkamp"
date: '`r Sys.Date()`'
output:
html_document:
code_folding: hide
df_print: kable
theme: lumen
toc: yes
toc_float: yes
rmarkdown::html_vignette:
df_print: kable
fig_height: 5
fig_width: 7
toc: yes
vignette: >
%\VignetteIndexEntry{Visualising the Development of openSenseMap.org in 2022}
%\VignetteEncoding{UTF-8}
%\VignetteEngine{knitr::rmarkdown}
---
> This vignette serves as an example of data wrangling & visualization with
`opensensmapr`, `dplyr` and `ggplot2`.
```{r setup, results='hide', message=FALSE, warning=FALSE}
# required packages:
library(opensensmapr) # data download
library(dplyr) # data wrangling
library(ggplot2) # plotting
library(lubridate) # date arithmetic
library(zoo) # rollmean()
```
openSenseMap.org has grown quite a bit in the last years; it would be interesting
to see how we got to the current `r osem_counts()$boxes` sensor stations,
split up by various attributes of the boxes.
While `opensensmapr` provides extensive methods of filtering boxes by attributes
on the server, we do the filtering within R to save time and gain flexibility.
So the first step is to retrieve *all the boxes*.
```{r download, results='hide', message=FALSE, warning=FALSE}
# if you want to see results for a specific subset of boxes,
# just specify a filter such as grouptag='ifgi' here
# boxes = osem_boxes(cache = '.')
boxes = readRDS('boxes_precomputed.rds') # read precomputed file to save resources
```
# Introduction
In the following we just want to have a look at the boxes created in 2022, so we filter for them.
```{r}
boxes = filter(boxes, locationtimestamp >= "2022-01-01" & locationtimestamp <="2022-12-31")
summary(boxes) -> summary.data.frame
```
<!-- This gives a good overview already: As of writing this, there are more than 11,000 -->
<!-- sensor stations, of which ~30% are currently running. Most of them are placed -->
<!-- outdoors and have around 5 sensors each. -->
<!-- The oldest station is from August 2016, while the latest station was registered a -->
<!-- couple of minutes ago. -->
Another feature of interest is the spatial distribution of the boxes: `plot()`
can help us out here. This function requires a bunch of optional dependencies though.
```{r, message=FALSE, warning=FALSE}
plot(boxes)
```
But what do these sensor stations actually measure? Let's find out.
`osem_phenomena()` gives us a named list of the counts of each observed
phenomenon for the given set of sensor stations:
```{r}
phenoms = osem_phenomena(boxes)
str(phenoms)
```
That's quite some noise there, with many phenomena being measured by a single
sensor only, or many duplicated phenomena due to slightly different spellings.
We should clean that up, but for now let's just filter out the noise and find
those phenomena with high sensor numbers:
```{r}
phenoms[phenoms > 50]
```
# Plot count of boxes by time {.tabset}
By looking at the `createdAt` attribute of each box we know the exact time a box
was registered. Because of some database migration issues, the `createdAt` values are mostly wrong (~80% of the boxes appear as created on 2022-03-30), so we use the `timestamp` attribute of the `currentlocation` instead, which in most cases corresponds to the creation date.
With this approach we have no information about boxes that were deleted in the
meantime, but that's okay for now.
## ...and exposure
```{r exposure_counts, message=FALSE}
exposure_counts = boxes %>%
group_by(exposure) %>%
mutate(count = row_number(locationtimestamp))
exposure_colors = c(indoor = 'red', outdoor = 'lightgreen', mobile = 'blue', unknown = 'darkgrey')
ggplot(exposure_counts, aes(x = locationtimestamp, y = count, colour = exposure)) +
geom_line() +
scale_colour_manual(values = exposure_colors) +
xlab('Registration Date') + ylab('senseBox count')
```
Outdoor boxes are growing *fast*!
We can also see the introduction of `mobile` sensor "stations" in 2017.
Let's have a quick summary:
```{r exposure_summary}
exposure_counts %>%
summarise(
oldest = min(locationtimestamp),
newest = max(locationtimestamp),
count = max(count)
) %>%
arrange(desc(count))
```
## ...and grouptag
We can try to find out where the increases in growth came from, by analysing the
box count by grouptag.
Caveats: Only a small subset of boxes has a grouptag, and we should assume
that these groups are actually bigger. Also, we can see that grouptag naming is
inconsistent (`Luftdaten`, `luftdaten.info`, ...).
```{r grouptag_counts, message=FALSE}
grouptag_counts = boxes %>%
group_by(grouptag) %>%
# only include grouptags with 15 or more members
filter(length(grouptag) >= 15 & !is.na(grouptag) & grouptag != '') %>%
mutate(count = row_number(locationtimestamp))
# helper for sorting the grouptags by boxcount
sortLvls = function(oldFactor, ascending = TRUE) {
lvls = table(oldFactor) %>% sort(., decreasing = !ascending) %>% names()
factor(oldFactor, levels = lvls)
}
grouptag_counts$grouptag = sortLvls(grouptag_counts$grouptag, ascending = FALSE)
ggplot(grouptag_counts, aes(x = locationtimestamp, y = count, colour = grouptag)) +
geom_line(aes(group = grouptag)) +
xlab('Registration Date') + ylab('senseBox count')
```
```{r grouptag_summary}
grouptag_counts %>%
summarise(
oldest = min(locationtimestamp),
newest = max(locationtimestamp),
count = max(count)
) %>%
arrange(desc(count))
```
# Plot rate of growth and inactivity per week
First we group the boxes by `locationtimestamp` into bins of one week:
```{r growthrate_registered, warning=FALSE, message=FALSE, results='hide'}
bins = 'week'
mvavg_bins = 6
growth = boxes %>%
mutate(week = cut(as.Date(locationtimestamp), breaks = bins)) %>%
group_by(week) %>%
summarize(count = length(week)) %>%
mutate(event = 'registered')
```
We can do the same for `updatedAt`, which tells us about the last change to
a box, including uploaded measurements. Because a database migration "updated" many
boxes on 2022-03-30, we use the `lastMeasurement`
attribute instead of `updatedAt`. This yields fewer boxes, but also automatically excludes
boxes which were created but never made a measurement.
This method of determining inactive boxes is fairly inaccurate and should be
considered an approximation, because we have no information about intermediate
inactive phases.
Deleted boxes would probably have a big impact here as well.
```{r growthrate_inactive, warning=FALSE, message=FALSE, results='hide'}
inactive = boxes %>%
# remove boxes without measurements in the last two days,
# b/c any box becomes inactive at some point by definition of lastMeasurement
filter(lastMeasurement < now() - days(2)) %>%
mutate(week = cut(as.Date(lastMeasurement), breaks = bins)) %>%
filter(as.Date(week) > as.Date("2021-12-31")) %>%
group_by(week) %>%
summarize(count = length(week)) %>%
mutate(event = 'inactive')
```
Now we can combine both datasets for plotting:
```{r growthrate, warning=FALSE, message=FALSE, results='hide'}
boxes_by_date = bind_rows(growth, inactive) %>% group_by(event)
ggplot(boxes_by_date, aes(x = as.Date(week), colour = event)) +
xlab('Time') + ylab(paste('rate per ', bins)) +
scale_x_date(date_breaks="years", date_labels="%Y") +
scale_colour_manual(values = c(registered = 'lightgreen', inactive = 'grey')) +
geom_point(aes(y = count), size = 0.5) +
# moving average, make first and last value NA (to ensure identical length of vectors)
geom_line(aes(y = rollmean(count, mvavg_bins, fill = list(NA, NULL, NA))))
```
And see in which weeks the most boxes become (in)active:
```{r table_mostregistrations}
boxes_by_date %>%
filter(count > 50) %>%
arrange(desc(count))
```
# Plot duration of boxes being active {.tabset}
While we are looking at `locationtimestamp` and `lastMeasurement`, we can also extract the duration of activity
of each box, and look at metrics by exposure and grouptag once more:
## ...by exposure
```{r exposure_duration, message=FALSE}
durations = boxes %>%
group_by(exposure) %>%
filter(!is.na(lastMeasurement)) %>%
mutate(duration = difftime(lastMeasurement, locationtimestamp, units='days')) %>%
filter(duration >= 0)
ggplot(durations, aes(x = exposure, y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days')
```
The time of activity averages at only `r round(mean(durations$duration))` days,
though there are boxes with `r round(max(durations$duration))` days of activity,
spanning a large chunk of openSenseMap's existence.
## ...by grouptag
```{r grouptag_duration, message=FALSE}
durations = boxes %>%
filter(!is.na(lastMeasurement)) %>%
group_by(grouptag) %>%
# only include grouptags with 15 or more members
filter(length(grouptag) >= 15 & !is.na(grouptag) & !is.na(lastMeasurement)) %>%
mutate(duration = difftime(lastMeasurement, locationtimestamp, units='days')) %>%
filter(duration >= 0)
ggplot(durations, aes(x = grouptag, y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days')
durations %>%
summarize(
duration_avg = round(mean(duration)),
duration_min = round(min(duration)),
duration_max = round(max(duration)),
oldest_box = round(max(difftime(now(), locationtimestamp, units='days')))
) %>%
arrange(desc(duration_avg))
```
The time of activity averages at only `r round(mean(durations$duration))` days,
though there are boxes with `r round(max(durations$duration))` days of activity,
spanning a large chunk of openSenseMap's existence.
## ...by year of registration
This is less useful, as older boxes are active for a longer time by definition.
If you have an idea how to compensate for that, please send a [Pull Request][PR]!
```{r year_duration, message=FALSE}
# NOTE: boxes older than 2016 missing due to missing updatedAt in database
duration = boxes %>%
mutate(year = cut(as.Date(locationtimestamp), breaks = 'year')) %>%
group_by(year) %>%
filter(!is.na(lastMeasurement)) %>%
mutate(duration = difftime(lastMeasurement, locationtimestamp, units='days')) %>%
filter(duration >= 0)
ggplot(duration, aes(x = substr(as.character(year), 0, 4), y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days') + xlab('Year of Registration')
```
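One possible way to compensate for this age bias is to normalize each box's active duration by the maximum duration possible since its registration. The following is only a sketch, not run against the live API; it assumes the `boxes` data frame and the dplyr/ggplot2 setup from the previous chunks, and `active_ratio` is a name introduced here for illustration:

```{r year_duration_relative, message=FALSE, eval=FALSE}
# sketch: fraction of each box's possible lifetime it was actually active
# (1 = still measuring today, 0 = never measured after registration)
duration_rel = boxes %>%
  filter(!is.na(lastMeasurement)) %>%
  mutate(
    age = as.numeric(difftime(now(), locationtimestamp, units = 'days')),
    duration = as.numeric(difftime(lastMeasurement, locationtimestamp, units = 'days'))
  ) %>%
  filter(duration >= 0, age > 0) %>%
  mutate(active_ratio = duration / age) %>%
  mutate(year = cut(as.Date(locationtimestamp), breaks = 'year'))

ggplot(duration_rel, aes(x = substr(as.character(year), 0, 4), y = active_ratio)) +
  geom_boxplot() +
  coord_flip() + ylab('Fraction of lifetime active') + xlab('Year of Registration')
```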
# More Visualisations
Other visualisations come to mind and are left as an exercise for the reader.
If you implement some, feel free to add them to this vignette via a [Pull Request][PR].
* growth by phenomenon
* growth by location -> (interactive) map
* set inactive rate in relation to total box count
* filter timespans with big dips in growth rate, and extrapolate the amount of
senseBoxes that could be on the platform today, assuming there were no production issues ;)
[PR]: https://github.com/sensebox/opensensmapr/pulls
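As a starting point for the third item, here is a sketch that reuses the `growth` and `inactive` data frames computed above (not run here; note that deleted boxes are not accounted for, so the ratio is only approximate):

```{r inactive_rate, message=FALSE, eval=FALSE}
# sketch: weekly inactive count relative to all boxes registered so far
total = growth %>%
  arrange(as.Date(week)) %>%
  mutate(total = cumsum(count)) %>%
  select(week, total)

inactive_rate = inactive %>%
  inner_join(total, by = 'week') %>%
  mutate(rate = count / total)

ggplot(inactive_rate, aes(x = as.Date(week), y = rate)) +
  geom_line() +
  xlab('Time') + ylab('weekly inactive boxes / total boxes')
```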

File diff suppressed because one or more lines are too long


@@ -1,64 +1,75 @@
 ## ----setup, include=FALSE-----------------------------------------------------
 knitr::opts_chunk$set(echo = TRUE)

-## ----results = F---------------------------------------------------------
+## ----results = FALSE----------------------------------------------------------
 library(magrittr)
 library(opensensmapr)

-all_sensors = osem_boxes()
+# all_sensors = osem_boxes(cache = '.')
+all_sensors = readRDS('boxes_precomputed.rds') # read precomputed file to save resources

 ## -----------------------------------------------------------------------------
 summary(all_sensors)

-## ----message=F, warning=F------------------------------------------------
-if (!require('maps')) install.packages('maps')
-if (!require('maptools')) install.packages('maptools')
-if (!require('rgeos')) install.packages('rgeos')
+## ---- message=FALSE, warning=FALSE--------------------------------------------
 plot(all_sensors)

 ## -----------------------------------------------------------------------------
 phenoms = osem_phenomena(all_sensors)
 str(phenoms)

 ## -----------------------------------------------------------------------------
 phenoms[phenoms > 20]

-## ----results = F---------------------------------------------------------
-pm25_sensors = osem_boxes(
-  exposure = 'outdoor',
-  date = Sys.time(), # ±4 hours
-  phenomenon = 'PM2.5'
-)
+## ----results = FALSE, eval=FALSE----------------------------------------------
+# pm25_sensors = osem_boxes(
+#   exposure = 'outdoor',
+#   date = Sys.time(), # ±4 hours
+#   phenomenon = 'PM2.5'
+# )

 ## -----------------------------------------------------------------------------
+pm25_sensors = readRDS('pm25_sensors.rds') # read precomputed file to save resources
 summary(pm25_sensors)
 plot(pm25_sensors)

-## ------------------------------------------------------------------------
+## ---- results=FALSE, message=FALSE--------------------------------------------
 library(sf)
 library(units)
 library(lubridate)
+library(dplyr)

-# construct a bounding box: 12 kilometers around Berlin
-berlin = st_point(c(13.4034, 52.5120)) %>%
-  st_sfc(crs = 4326) %>%
-  st_transform(3857) %>% # allow setting a buffer in meters
-  st_buffer(units::set_units(12, km)) %>%
-  st_transform(4326) %>% # the opensensemap expects WGS 84
-  st_bbox()
-
-## ----results = F---------------------------------------------------------
-pm25 = osem_measurements(
-  berlin,
-  phenomenon = 'PM2.5',
-  from = now() - days(7), # defaults to 2 days
-  to = now()
-)
+## ----bbox, results = FALSE, eval=FALSE----------------------------------------
+# # construct a bounding box: 12 kilometers around Berlin
+# berlin = st_point(c(13.4034, 52.5120)) %>%
+#   st_sfc(crs = 4326) %>%
+#   st_transform(3857) %>% # allow setting a buffer in meters
+#   st_buffer(set_units(12, km)) %>%
+#   st_transform(4326) %>% # the opensensemap expects WGS 84
+#   st_bbox()
+# pm25 = osem_measurements(
+#   berlin,
+#   phenomenon = 'PM2.5',
+#   from = now() - days(3), # defaults to 2 days
+#   to = now()
+# )
+#

+## -----------------------------------------------------------------------------
+pm25 = readRDS('pm25_berlin.rds') # read precomputed file to save resources
 plot(pm25)

-## ------------------------------------------------------------------------
-pm25_sf = osem_as_sf(pm25)
-plot(st_geometry(pm25_sf), axes = T)
+## ---- warning=FALSE-----------------------------------------------------------
+outliers = filter(pm25, value > 100)$sensorId
+bad_sensors = outliers[, drop = TRUE] %>% levels()
+pm25 = mutate(pm25, invalid = sensorId %in% bad_sensors)
+
+## -----------------------------------------------------------------------------
+st_as_sf(pm25) %>% st_geometry() %>% plot(col = factor(pm25$invalid), axes = TRUE)
+
+## -----------------------------------------------------------------------------
+pm25 %>% filter(invalid == FALSE) %>% plot()


@@ -1,5 +1,5 @@
 ---
-title: "Analyzing environmental sensor data from openSenseMap.org in R"
+title: "Exploring the openSenseMap Dataset"
 author: "Norwin Roosen"
 date: "`r Sys.Date()`"
 output:
@@ -8,7 +8,7 @@ output:
   fig_width: 6
   fig_height: 4
 vignette: >
-  %\VignetteIndexEntry{Analyzing environmental sensor data from openSenseMap.org in R}
+  %\VignetteIndexEntry{Exploring the openSenseMap Dataset}
   %\VignetteEngine{knitr::rmarkdown}
   %\VignetteEncoding{UTF-8}
 ---
@@ -17,48 +17,38 @@ vignette: >
 knitr::opts_chunk$set(echo = TRUE)
 ```

-## Analyzing environmental sensor data from openSenseMap.org in R
-
 This package provides data ingestion functions for almost any data stored on the
-open data platform for environemental sensordata <https://opensensemap.org>.
+open data platform for environmental sensordata <https://opensensemap.org>.
 Its main goals are to provide means for:

 - big data analysis of the measurements stored on the platform
 - sensor metadata analysis (sensor counts, spatial distribution, temporal trends)

-> *Please note:* The openSenseMap API is sometimes a bit unstable when streaming
-long responses, which results in `curl` complaining about `Unexpected EOF`. This
-bug is being worked on upstream. Meanwhile you have to retry the request when
-this occurs.
-
 ### Exploring the dataset
 Before we look at actual observations, lets get a grasp of the openSenseMap
 datasets' structure.

-```{r results = F}
+```{r results = FALSE}
 library(magrittr)
 library(opensensmapr)

-all_sensors = osem_boxes()
+# all_sensors = osem_boxes(cache = '.')
+all_sensors = readRDS('boxes_precomputed.rds') # read precomputed file to save resources
 ```
 ```{r}
 summary(all_sensors)
 ```

-This gives a good overview already: As of writing this, there are more than 600
+This gives a good overview already: As of writing this, there are more than 700
 sensor stations, of which ~50% are currently running. Most of them are placed
 outdoors and have around 5 sensors each.
 The oldest station is from May 2014, while the latest station was registered a
 couple of minutes ago.

-Another feature of interest is the spatial distribution of the boxes. `plot()`
-can help us out here. This function requires a bunch of optional dependcies though.
+Another feature of interest is the spatial distribution of the boxes: `plot()`
+can help us out here. This function requires a bunch of optional dependencies though.

-```{r message=F, warning=F}
-if (!require('maps')) install.packages('maps')
-if (!require('maptools')) install.packages('maptools')
-if (!require('rgeos')) install.packages('rgeos')
+```{r, message=FALSE, warning=FALSE}
 plot(all_sensors)
 ```
@@ -88,7 +78,7 @@ We should check how many sensor stations provide useful data: We want only those
 boxes with a PM2.5 sensor, that are placed outdoors and are currently submitting
 measurements:

-```{r results = F}
+```{r results = FALSE, eval=FALSE}
 pm25_sensors = osem_boxes(
   exposure = 'outdoor',
   date = Sys.time(), # ±4 hours
@@ -96,6 +86,8 @@ pm25_sensors = osem_boxes(
 )
 ```

 ```{r}
+pm25_sensors = readRDS('pm25_sensors.rds') # read precomputed file to save resources
 summary(pm25_sensors)
 plot(pm25_sensors)
 ```
@@ -104,40 +96,61 @@ Thats still more than 200 measuring stations, we can work with that.
 ### Analyzing sensor data
 Having analyzed the available data sources, let's finally get some measurements.

-We could call `osem_measurements(pm25_sensors)` now, however we are focussing on
+We could call `osem_measurements(pm25_sensors)` now, however we are focusing on
 a restricted area of interest, the city of Berlin.
 Luckily we can get the measurements filtered by a bounding box:

-```{r}
+```{r, results=FALSE, message=FALSE}
 library(sf)
 library(units)
 library(lubridate)
+library(dplyr)
+```

+Since the API takes quite long to return measurements, especially when filtered on space and time, we do not run the following chunks for publication of the package on CRAN.

+```{r bbox, results = FALSE, eval=FALSE}
 # construct a bounding box: 12 kilometers around Berlin
 berlin = st_point(c(13.4034, 52.5120)) %>%
   st_sfc(crs = 4326) %>%
   st_transform(3857) %>% # allow setting a buffer in meters
-  st_buffer(units::set_units(12, km)) %>%
+  st_buffer(set_units(12, km)) %>%
   st_transform(4326) %>% # the opensensemap expects WGS 84
   st_bbox()
-```

-```{r results = F}
 pm25 = osem_measurements(
   berlin,
   phenomenon = 'PM2.5',
-  from = now() - days(7), # defaults to 2 days
+  from = now() - days(3), # defaults to 2 days
   to = now()
 )
+```
+
+```{r}
+pm25 = readRDS('pm25_berlin.rds') # read precomputed file to save resources
 plot(pm25)
 ```

-Now we can get started with actual spatiotemporal data analysis. First plot the
-measuring locations:
+Now we can get started with actual spatiotemporal data analysis.
+First, lets mask the seemingly uncalibrated sensors:

-```{r}
-pm25_sf = osem_as_sf(pm25)
-plot(st_geometry(pm25_sf), axes = T)
+```{r, warning=FALSE}
+outliers = filter(pm25, value > 100)$sensorId
+bad_sensors = outliers[, drop = TRUE] %>% levels()
+pm25 = mutate(pm25, invalid = sensorId %in% bad_sensors)
 ```

-further analysis: `TODO`
+Then plot the measuring locations, flagging the outliers:
+
+```{r}
+st_as_sf(pm25) %>% st_geometry() %>% plot(col = factor(pm25$invalid), axes = TRUE)
+```
+
+Removing these sensors yields a nicer time series plot:
+
+```{r}
+pm25 %>% filter(invalid == FALSE) %>% plot()
+```
+
+Further analysis: comparison with LANUV data `TODO`

File diff suppressed because one or more lines are too long


@@ -0,0 +1,51 @@
## ----setup, results='hide'----------------------------------------------------
# this vignette requires:
library(opensensmapr)
library(jsonlite)
library(readr)
## ----cache--------------------------------------------------------------------
b = osem_boxes(grouptag = 'ifgi', cache = tempdir())
# the next identical request will hit the cache only!
b = osem_boxes(grouptag = 'ifgi', cache = tempdir())
# requests without the cache parameter will still be performed normally
b = osem_boxes(grouptag = 'ifgi')
## ----cachelisting-------------------------------------------------------------
list.files(tempdir(), pattern = 'osemcache\\..*\\.rds')
## ----cache_custom-------------------------------------------------------------
cacheDir = getwd() # current working directory
b = osem_boxes(grouptag = 'ifgi', cache = cacheDir)
# the next identical request will hit the cache only!
b = osem_boxes(grouptag = 'ifgi', cache = cacheDir)
## ----clearcache, results='hide'-----------------------------------------------
osem_clear_cache() # clears default cache
osem_clear_cache(getwd()) # clears a custom cache
## ----data, results='hide', eval=FALSE-----------------------------------------
# # first get our example data:
# measurements = osem_measurements('Windgeschwindigkeit')
## ----serialize_json, eval=FALSE-----------------------------------------------
# # serializing senseBoxes to JSON, and loading from file again:
# write(jsonlite::serializeJSON(measurements), 'measurements.json')
# measurements_from_file = jsonlite::unserializeJSON(readr::read_file('measurements.json'))
# class(measurements_from_file)
## ----serialize_attrs, eval=FALSE----------------------------------------------
# # note the toJSON call instead of serializeJSON
# write(jsonlite::toJSON(measurements), 'measurements_bad.json')
# measurements_without_attrs = jsonlite::fromJSON('measurements_bad.json')
# class(measurements_without_attrs)
#
# measurements_with_attrs = osem_as_measurements(measurements_without_attrs)
# class(measurements_with_attrs)
## ----cleanup, include=FALSE, eval=FALSE---------------------------------------
# file.remove('measurements.json', 'measurements_bad.json')


@@ -0,0 +1,106 @@
---
title: "Caching openSenseMap Data for Reproducibility"
author: "Norwin Roosen"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Caching openSenseMap Data for Reproducibility}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
It may be useful to download data from openSenseMap only once.
For reproducible results, the data should be saved to disk, and reloaded at a
later point.
This avoids:
- changed results for queries without date parameters,
- unnecessary wait times,
- risk of API changes / API unavailability,
- stress on the openSenseMap-server.
This vignette shows how to use this built-in `opensensmapr` feature, and
how to do it yourself in case you want to save to other data formats.
```{r setup, results='hide'}
# this vignette requires:
library(opensensmapr)
library(jsonlite)
library(readr)
```
## Using the opensensmapr Caching Feature
All data retrieval functions of `opensensmapr` have a built-in caching feature,
which serializes an API response to disk.
Subsequent identical requests will then return the serialized data instead of making
another request.
To use this feature, just add a path to a directory to the `cache` parameter:
```{r cache}
b = osem_boxes(grouptag = 'ifgi', cache = tempdir())
# the next identical request will hit the cache only!
b = osem_boxes(grouptag = 'ifgi', cache = tempdir())
# requests without the cache parameter will still be performed normally
b = osem_boxes(grouptag = 'ifgi')
```
Looking at the cache directory we can see one file for each request, which is identified through a hash of the request URL:
```{r cachelisting}
list.files(tempdir(), pattern = 'osemcache\\..*\\.rds')
```
You can maintain multiple caches simultaneously, which allows you to store only the data related to a given script in its own directory:
```{r cache_custom}
cacheDir = getwd() # current working directory
b = osem_boxes(grouptag = 'ifgi', cache = cacheDir)
# the next identical request will hit the cache only!
b = osem_boxes(grouptag = 'ifgi', cache = cacheDir)
```
To get fresh results again, just call `osem_clear_cache()` for the respective cache:
```{r clearcache, results='hide'}
osem_clear_cache() # clears default cache
osem_clear_cache(getwd()) # clears a custom cache
```
## Custom (De-) Serialization
If you want to roll your own serialization method to support custom data formats,
here's how:
```{r data, results='hide', eval=FALSE}
# first get our example data:
measurements = osem_measurements('Windgeschwindigkeit')
```
If you are paranoid and worry about `.rds` files not being decodable anymore
in the (distant) future, you could serialize to a plain text format such as JSON.
This of course comes at the cost of storage space and performance.
```{r serialize_json, eval=FALSE}
# serializing senseBoxes to JSON, and loading from file again:
write(jsonlite::serializeJSON(measurements), 'measurements.json')
measurements_from_file = jsonlite::unserializeJSON(readr::read_file('measurements.json'))
class(measurements_from_file)
```
This method also persists the R object metadata (classes, attributes).
If you were to use a serialization method that can't persist object metadata, you
could re-apply it with the following functions:
```{r serialize_attrs, eval=FALSE}
# note the toJSON call instead of serializeJSON
write(jsonlite::toJSON(measurements), 'measurements_bad.json')
measurements_without_attrs = jsonlite::fromJSON('measurements_bad.json')
class(measurements_without_attrs)
measurements_with_attrs = osem_as_measurements(measurements_without_attrs)
class(measurements_with_attrs)
```
The same goes for boxes via `osem_as_sensebox()`.
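For illustration, here is a minimal sketch of the same round-trip for boxes (not run here; it assumes `b` is a sensebox data frame as retrieved in the caching examples above, and the file name is chosen only for this example):

```{r serialize_boxes, eval=FALSE}
# note the toJSON call again: plain JSON drops the sensebox class
write(jsonlite::toJSON(b), 'boxes_bad.json')
boxes_without_attrs = jsonlite::fromJSON('boxes_bad.json')

# osem_as_sensebox() re-applies the class and attributes
boxes_with_attrs = osem_as_sensebox(boxes_without_attrs)
class(boxes_with_attrs)
```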
```{r cleanup, include=FALSE, eval=FALSE}
file.remove('measurements.json', 'measurements_bad.json')
```


@@ -0,0 +1,440 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="generator" content="pandoc" />
<meta http-equiv="X-UA-Compatible" content="IE=EDGE" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="author" content="Norwin Roosen" />
<meta name="date" content="2023-03-08" />
<title>Caching openSenseMap Data for Reproducibility</title>
<script>// Pandoc 2.9 adds attributes on both header and div. We remove the former (to
// be compatible with the behavior of Pandoc < 2.8).
document.addEventListener('DOMContentLoaded', function(e) {
var hs = document.querySelectorAll("div.section[class*='level'] > :first-child");
var i, h, a;
for (i = 0; i < hs.length; i++) {
h = hs[i];
if (!/^h[1-6]$/i.test(h.tagName)) continue; // it should be a header h1-h6
a = h.attributes;
while (a.length > 0) h.removeAttribute(a[0].name);
}
});
</script>
<style type="text/css">
code{white-space: pre-wrap;}
span.smallcaps{font-variant: small-caps;}
span.underline{text-decoration: underline;}
div.column{display: inline-block; vertical-align: top; width: 50%;}
div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
ul.task-list{list-style: none;}
</style>
<style type="text/css">
code {
white-space: pre;
}
.sourceCode {
overflow: visible;
}
</style>
<style type="text/css" data-origin="pandoc">
pre > code.sourceCode { white-space: pre; position: relative; }
pre > code.sourceCode > span { display: inline-block; line-height: 1.25; }
pre > code.sourceCode > span:empty { height: 1.2em; }
.sourceCode { overflow: visible; }
code.sourceCode > span { color: inherit; text-decoration: inherit; }
div.sourceCode { margin: 1em 0; }
pre.sourceCode { margin: 0; }
@media screen {
div.sourceCode { overflow: auto; }
}
@media print {
pre > code.sourceCode { white-space: pre-wrap; }
pre > code.sourceCode > span { text-indent: -5em; padding-left: 5em; }
}
pre.numberSource code
{ counter-reset: source-line 0; }
pre.numberSource code > span
{ position: relative; left: -4em; counter-increment: source-line; }
pre.numberSource code > span > a:first-child::before
{ content: counter(source-line);
position: relative; left: -1em; text-align: right; vertical-align: baseline;
border: none; display: inline-block;
-webkit-touch-callout: none; -webkit-user-select: none;
-khtml-user-select: none; -moz-user-select: none;
-ms-user-select: none; user-select: none;
padding: 0 4px; width: 4em;
color: #aaaaaa;
}
pre.numberSource { margin-left: 3em; border-left: 1px solid #aaaaaa; padding-left: 4px; }
div.sourceCode
{ }
@media screen {
pre > code.sourceCode > span > a:first-child::before { text-decoration: underline; }
}
code span.al { color: #ff0000; font-weight: bold; }
code span.an { color: #60a0b0; font-weight: bold; font-style: italic; }
code span.at { color: #7d9029; }
code span.bn { color: #40a070; }
code span.bu { color: #008000; }
code span.cf { color: #007020; font-weight: bold; }
code span.ch { color: #4070a0; }
code span.cn { color: #880000; }
code span.co { color: #60a0b0; font-style: italic; }
code span.cv { color: #60a0b0; font-weight: bold; font-style: italic; }
code span.do { color: #ba2121; font-style: italic; }
code span.dt { color: #902000; }
code span.dv { color: #40a070; }
code span.er { color: #ff0000; font-weight: bold; }
code span.ex { }
code span.fl { color: #40a070; }
code span.fu { color: #06287e; }
code span.im { color: #008000; font-weight: bold; }
code span.in { color: #60a0b0; font-weight: bold; font-style: italic; }
code span.kw { color: #007020; font-weight: bold; }
code span.op { color: #666666; }
code span.ot { color: #007020; }
code span.pp { color: #bc7a00; }
code span.sc { color: #4070a0; }
code span.ss { color: #bb6688; }
code span.st { color: #4070a0; }
code span.va { color: #19177c; }
code span.vs { color: #4070a0; }
code span.wa { color: #60a0b0; font-weight: bold; font-style: italic; }
</style>
<script>
// apply pandoc div.sourceCode style to pre.sourceCode instead
(function() {
var sheets = document.styleSheets;
for (var i = 0; i < sheets.length; i++) {
if (sheets[i].ownerNode.dataset["origin"] !== "pandoc") continue;
try { var rules = sheets[i].cssRules; } catch (e) { continue; }
var j = 0;
while (j < rules.length) {
var rule = rules[j];
// check if there is a div.sourceCode rule
if (rule.type !== rule.STYLE_RULE || rule.selectorText !== "div.sourceCode") {
j++;
continue;
}
var style = rule.style.cssText;
// check if color or background-color is set
if (rule.style.color === '' && rule.style.backgroundColor === '') {
j++;
continue;
}
// replace div.sourceCode by a pre.sourceCode rule
sheets[i].deleteRule(j);
sheets[i].insertRule('pre.sourceCode{' + style + '}', j);
}
}
})();
</script>
<style type="text/css">body {
background-color: #fff;
margin: 1em auto;
max-width: 700px;
overflow: visible;
padding-left: 2em;
padding-right: 2em;
font-family: "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;
font-size: 14px;
line-height: 1.35;
}
#TOC {
clear: both;
margin: 0 0 10px 10px;
padding: 4px;
width: 400px;
border: 1px solid #CCCCCC;
border-radius: 5px;
background-color: #f6f6f6;
font-size: 13px;
line-height: 1.3;
}
#TOC .toctitle {
font-weight: bold;
font-size: 15px;
margin-left: 5px;
}
#TOC ul {
padding-left: 40px;
margin-left: -1.5em;
margin-top: 5px;
margin-bottom: 5px;
}
#TOC ul ul {
margin-left: -2em;
}
#TOC li {
line-height: 16px;
}
table {
margin: 1em auto;
border-width: 1px;
border-color: #DDDDDD;
border-style: outset;
border-collapse: collapse;
}
table th {
border-width: 2px;
padding: 5px;
border-style: inset;
}
table td {
border-width: 1px;
border-style: inset;
line-height: 18px;
padding: 5px 5px;
}
table, table th, table td {
border-left-style: none;
border-right-style: none;
}
table thead, table tr.even {
background-color: #f7f7f7;
}
p {
margin: 0.5em 0;
}
blockquote {
background-color: #f6f6f6;
padding: 0.25em 0.75em;
}
hr {
border-style: solid;
border: none;
border-top: 1px solid #777;
margin: 28px 0;
}
dl {
margin-left: 0;
}
dl dd {
margin-bottom: 13px;
margin-left: 13px;
}
dl dt {
font-weight: bold;
}
ul {
margin-top: 0;
}
ul li {
list-style: circle outside;
}
ul ul {
margin-bottom: 0;
}
pre, code {
background-color: #f7f7f7;
border-radius: 3px;
color: #333;
white-space: pre-wrap;
}
pre {
border-radius: 3px;
margin: 5px 0px 10px 0px;
padding: 10px;
}
pre:not([class]) {
background-color: #f7f7f7;
}
code {
font-family: Consolas, Monaco, 'Courier New', monospace;
font-size: 85%;
}
p > code, li > code {
padding: 2px 0px;
}
div.figure {
text-align: center;
}
img {
background-color: #FFFFFF;
padding: 2px;
border: 1px solid #DDDDDD;
border-radius: 3px;
border: 1px solid #CCCCCC;
margin: 0 5px;
}
h1 {
margin-top: 0;
font-size: 35px;
line-height: 40px;
}
h2 {
border-bottom: 4px solid #f7f7f7;
padding-top: 10px;
padding-bottom: 2px;
font-size: 145%;
}
h3 {
border-bottom: 2px solid #f7f7f7;
padding-top: 10px;
font-size: 120%;
}
h4 {
border-bottom: 1px solid #f7f7f7;
margin-left: 8px;
font-size: 105%;
}
h5, h6 {
border-bottom: 1px solid #ccc;
font-size: 105%;
}
a {
color: #0033dd;
text-decoration: none;
}
a:hover {
color: #6666ff; }
a:visited {
color: #800080; }
a:visited:hover {
color: #BB00BB; }
a[href^="http:"] {
text-decoration: underline; }
a[href^="https:"] {
text-decoration: underline; }
code > span.kw { color: #555; font-weight: bold; }
code > span.dt { color: #902000; }
code > span.dv { color: #40a070; }
code > span.bn { color: #d14; }
code > span.fl { color: #d14; }
code > span.ch { color: #d14; }
code > span.st { color: #d14; }
code > span.co { color: #888888; font-style: italic; }
code > span.ot { color: #007020; }
code > span.al { color: #ff0000; font-weight: bold; }
code > span.fu { color: #900; font-weight: bold; }
code > span.er { color: #a61717; background-color: #e3d2d2; }
</style>
</head>
<body>
<h1 class="title toc-ignore">Caching openSenseMap Data for
Reproducibility</h1>
<h4 class="author">Norwin Roosen</h4>
<h4 class="date">2023-03-08</h4>
<p>It may be useful to download data from openSenseMap only once. For
reproducible results, the data should be saved to disk, and reloaded at
a later point.</p>
<p>This avoids..</p>
<ul>
<li>changed results for queries without date parameters,</li>
<li>unnecessary wait times,</li>
<li>risk of API changes / API unavailability,</li>
<li>stress on the openSenseMap-server.</li>
</ul>
<p>This vignette shows how to use this built in
<code>opensensmapr</code> feature, and how to do it yourself in case you
want to save to other data formats.</p>
<div class="sourceCode" id="cb1"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb1-1"><a href="#cb1-1" aria-hidden="true" tabindex="-1"></a><span class="co"># this vignette requires:</span></span>
<span id="cb1-2"><a href="#cb1-2" aria-hidden="true" tabindex="-1"></a><span class="fu">library</span>(opensensmapr)</span>
<span id="cb1-3"><a href="#cb1-3" aria-hidden="true" tabindex="-1"></a><span class="fu">library</span>(jsonlite)</span>
<span id="cb1-4"><a href="#cb1-4" aria-hidden="true" tabindex="-1"></a><span class="fu">library</span>(readr)</span></code></pre></div>
<div id="using-the-opensensmapr-caching-feature" class="section level2">
<h2>Using the opensensmapr Caching Feature</h2>
<p>All data retrieval functions of <code>opensensmapr</code> have a
built in caching feature, which serializes an API response to disk.
Subsequent identical requests will then return the serialized data
instead of making another request.</p>
<p>To use this feature, just add a path to a directory to the
<code>cache</code> parameter:</p>
<div class="sourceCode" id="cb2"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb2-1"><a href="#cb2-1" aria-hidden="true" tabindex="-1"></a>b <span class="ot">=</span> <span class="fu">osem_boxes</span>(<span class="at">grouptag =</span> <span class="st">&#39;ifgi&#39;</span>, <span class="at">cache =</span> <span class="fu">tempdir</span>())</span>
<span id="cb2-2"><a href="#cb2-2" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb2-3"><a href="#cb2-3" aria-hidden="true" tabindex="-1"></a><span class="co"># the next identical request will hit the cache only!</span></span>
<span id="cb2-4"><a href="#cb2-4" aria-hidden="true" tabindex="-1"></a>b <span class="ot">=</span> <span class="fu">osem_boxes</span>(<span class="at">grouptag =</span> <span class="st">&#39;ifgi&#39;</span>, <span class="at">cache =</span> <span class="fu">tempdir</span>())</span>
<span id="cb2-5"><a href="#cb2-5" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb2-6"><a href="#cb2-6" aria-hidden="true" tabindex="-1"></a><span class="co"># requests without the cache parameter will still be performed normally</span></span>
<span id="cb2-7"><a href="#cb2-7" aria-hidden="true" tabindex="-1"></a>b <span class="ot">=</span> <span class="fu">osem_boxes</span>(<span class="at">grouptag =</span> <span class="st">&#39;ifgi&#39;</span>)</span></code></pre></div>
<p>Looking at the cache directory we can see one file for each request,
which is identified through a hash of the request URL:</p>
<div class="sourceCode" id="cb3"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb3-1"><a href="#cb3-1" aria-hidden="true" tabindex="-1"></a><span class="fu">list.files</span>(<span class="fu">tempdir</span>(), <span class="at">pattern =</span> <span class="st">&#39;osemcache</span><span class="sc">\\</span><span class="st">..*</span><span class="sc">\\</span><span class="st">.rds&#39;</span>)</span></code></pre></div>
<pre><code>## [1] &quot;osemcache.17db5c57fc6fca4d836fa2cf30345ce8767cd61a.rds&quot;</code></pre>
<p>You can maintain multiple caches simultaneously, which lets you
store the data for each script in its own directory:</p>
<div class="sourceCode" id="cb5"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb5-1"><a href="#cb5-1" aria-hidden="true" tabindex="-1"></a>cacheDir <span class="ot">=</span> <span class="fu">getwd</span>() <span class="co"># current working directory</span></span>
<span id="cb5-2"><a href="#cb5-2" aria-hidden="true" tabindex="-1"></a>b <span class="ot">=</span> <span class="fu">osem_boxes</span>(<span class="at">grouptag =</span> <span class="st">&#39;ifgi&#39;</span>, <span class="at">cache =</span> cacheDir)</span>
<span id="cb5-3"><a href="#cb5-3" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb5-4"><a href="#cb5-4" aria-hidden="true" tabindex="-1"></a><span class="co"># the next identical request will hit the cache only!</span></span>
<span id="cb5-5"><a href="#cb5-5" aria-hidden="true" tabindex="-1"></a>b <span class="ot">=</span> <span class="fu">osem_boxes</span>(<span class="at">grouptag =</span> <span class="st">&#39;ifgi&#39;</span>, <span class="at">cache =</span> cacheDir)</span></code></pre></div>
<p>To get fresh results again, just call <code>osem_clear_cache()</code>
for the respective cache:</p>
<div class="sourceCode" id="cb6"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb6-1"><a href="#cb6-1" aria-hidden="true" tabindex="-1"></a><span class="fu">osem_clear_cache</span>() <span class="co"># clears default cache</span></span>
<span id="cb6-2"><a href="#cb6-2" aria-hidden="true" tabindex="-1"></a><span class="fu">osem_clear_cache</span>(<span class="fu">getwd</span>()) <span class="co"># clears a custom cache</span></span></code></pre></div>
</div>
<div id="custom-de--serialization" class="section level2">
<h2>Custom (De-) Serialization</h2>
<p>If you want to roll your own serialization method to support custom
data formats, here&#39;s how:</p>
<div class="sourceCode" id="cb7"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb7-1"><a href="#cb7-1" aria-hidden="true" tabindex="-1"></a><span class="co"># first get our example data:</span></span>
<span id="cb7-2"><a href="#cb7-2" aria-hidden="true" tabindex="-1"></a>measurements <span class="ot">=</span> <span class="fu">osem_measurements</span>(<span class="st">&#39;Windgeschwindigkeit&#39;</span>)</span></code></pre></div>
<p>If you are paranoid and worry about <code>.rds</code> files not being
decodable anymore in the (distant) future, you could serialize to a
plain text format such as JSON. This of course comes at the cost of
storage space and performance.</p>
<div class="sourceCode" id="cb8"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb8-1"><a href="#cb8-1" aria-hidden="true" tabindex="-1"></a><span class="co"># serializing senseBoxes to JSON, and loading from file again:</span></span>
<span id="cb8-2"><a href="#cb8-2" aria-hidden="true" tabindex="-1"></a><span class="fu">write</span>(jsonlite<span class="sc">::</span><span class="fu">serializeJSON</span>(measurements), <span class="st">&#39;measurements.json&#39;</span>)</span>
<span id="cb8-3"><a href="#cb8-3" aria-hidden="true" tabindex="-1"></a>measurements_from_file <span class="ot">=</span> jsonlite<span class="sc">::</span><span class="fu">unserializeJSON</span>(readr<span class="sc">::</span><span class="fu">read_file</span>(<span class="st">&#39;measurements.json&#39;</span>))</span>
<span id="cb8-4"><a href="#cb8-4" aria-hidden="true" tabindex="-1"></a><span class="fu">class</span>(measurements_from_file)</span></code></pre></div>
<p>This method also persists the R object metadata (classes,
attributes). If you were to use a serialization method that can&#39;t
persist object metadata, you could re-apply it with the following
functions:</p>
<div class="sourceCode" id="cb9"><pre class="sourceCode r"><code class="sourceCode r"><span id="cb9-1"><a href="#cb9-1" aria-hidden="true" tabindex="-1"></a><span class="co"># note the toJSON call instead of serializeJSON</span></span>
<span id="cb9-2"><a href="#cb9-2" aria-hidden="true" tabindex="-1"></a><span class="fu">write</span>(jsonlite<span class="sc">::</span><span class="fu">toJSON</span>(measurements), <span class="st">&#39;measurements_bad.json&#39;</span>)</span>
<span id="cb9-3"><a href="#cb9-3" aria-hidden="true" tabindex="-1"></a>measurements_without_attrs <span class="ot">=</span> jsonlite<span class="sc">::</span><span class="fu">fromJSON</span>(<span class="st">&#39;measurements_bad.json&#39;</span>)</span>
<span id="cb9-4"><a href="#cb9-4" aria-hidden="true" tabindex="-1"></a><span class="fu">class</span>(measurements_without_attrs)</span>
<span id="cb9-5"><a href="#cb9-5" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb9-6"><a href="#cb9-6" aria-hidden="true" tabindex="-1"></a>measurements_with_attrs <span class="ot">=</span> <span class="fu">osem_as_measurements</span>(measurements_without_attrs)</span>
<span id="cb9-7"><a href="#cb9-7" aria-hidden="true" tabindex="-1"></a><span class="fu">class</span>(measurements_with_attrs)</span></code></pre></div>
<p>The same goes for boxes via <code>osem_as_sensebox()</code>.</p>
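<p>The same pattern works for other tabular formats too. As a minimal
sketch (using the <code>readr</code> package loaded above; the exact
columns depend on your query), a round trip through CSV keeps only the
plain table, so the class has to be re-attached afterwards:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"># serializing to CSV drops all R object metadata...
readr::write_csv(measurements, &#39;measurements.csv&#39;)

# ...so the osem_measurements class must be re-attached after reading
measurements_csv = osem_as_measurements(readr::read_csv(&#39;measurements.csv&#39;))
class(measurements_csv)</code></pre></div>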
</div>
<!-- code folding -->
<!-- dynamically load mathjax for compatibility with self-contained -->
<script>
(function () {
var script = document.createElement("script");
script.type = "text/javascript";
script.src = "https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML";
document.getElementsByTagName("head")[0].appendChild(script);
})();
</script>
</body>
</html>

@ -0,0 +1,25 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/archive.R
\name{archive_fetch_measurements}
\alias{archive_fetch_measurements}
\title{fetch measurements from archive from a single box, and a single sensor}
\usage{
archive_fetch_measurements(box, sensorId, fromDate, toDate, progress)
}
\arguments{
\item{box}{A sensebox data.frame with a single box}
\item{sensorId}{Character specifying the sensor}
\item{fromDate}{Start date for measurement download, must be convertible via `as.Date`.}
\item{toDate}{End date for measurement download (inclusive).}
\item{progress}{whether to print progress}
}
\value{
A \code{tbl_df} containing observations of all selected sensors for each time stamp.
}
\description{
fetch measurements from archive from a single box, and a single sensor
}

@ -0,0 +1,21 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/external_generics.R
\name{filter.osem_measurements}
\alias{filter.osem_measurements}
\title{Return rows with matching conditions, while maintaining class & attributes}
\usage{
filter.osem_measurements(.data, ..., .dots)
}
\arguments{
\item{.data}{An osem_measurements data.frame to filter}
\item{...}{other arguments}
\item{.dots}{see corresponding function in package \code{\link{dplyr}}}
}
\description{
Return rows with matching conditions, while maintaining class & attributes
}
\seealso{
\code{\link[dplyr]{filter}}
}

man/filter.sensebox.Rd
@ -0,0 +1,21 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/external_generics.R
\name{filter.sensebox}
\alias{filter.sensebox}
\title{Return rows with matching conditions, while maintaining class & attributes}
\usage{
filter.sensebox(.data, ..., .dots)
}
\arguments{
\item{.data}{A sensebox data.frame to filter}
\item{...}{other arguments}
\item{.dots}{see corresponding function in package \code{\link{dplyr}}}
}
\description{
Return rows with matching conditions, while maintaining class & attributes
}
\seealso{
\code{\link[dplyr]{filter}}
}

@ -0,0 +1,21 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/external_generics.R
\name{mutate.osem_measurements}
\alias{mutate.osem_measurements}
\title{Add new variables to the data, while maintaining class & attributes}
\usage{
mutate.osem_measurements(.data, ..., .dots)
}
\arguments{
\item{.data}{An osem_measurements data.frame to mutate}
\item{...}{other arguments}
\item{.dots}{see corresponding function in package \code{\link{dplyr}}}
}
\description{
Add new variables to the data, while maintaining class & attributes
}
\seealso{
\code{\link[dplyr]{mutate}}
}

man/mutate.sensebox.Rd
@ -0,0 +1,21 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/external_generics.R
\name{mutate.sensebox}
\alias{mutate.sensebox}
\title{Add new variables to the data, while maintaining class & attributes}
\usage{
mutate.sensebox(.data, ..., .dots)
}
\arguments{
\item{.data}{A sensebox data.frame to mutate}
\item{...}{other arguments}
\item{.dots}{see corresponding function in package \code{\link{dplyr}}}
}
\description{
Add new variables to the data, while maintaining class & attributes
}
\seealso{
\code{\link[dplyr]{mutate}}
}

@ -4,41 +4,124 @@
\name{opensensmapr}
\alias{opensensmapr}
\alias{opensensmapr-package}
\title{opensensmapr: Get sensor data from opensensemap.org}
\description{
The opensensmapr package provides functions for
\itemize{
\item retrieval of senseBox metadata,
\item retrieval of senseBox measurements,
\item general statistics about the openSenseMap database.
}
Additionally, helper functions are provided to ease the integration with the
\code{\link[sf]{sf}} package for spatial analysis as well as
\code{\link[dplyr]{dplyr}} for general data handling.
}
\section{Retrieving senseBox metadata}{
On the openSenseMap, measurements are provided by sensors which are assigned
to a sensor station ("senseBox").
A senseBox consists of a collection of sensors, a location (-history), an ID,
as well as metadata about its owner & placement.
senseBoxes can be retrieved either by ID, or as a collection with optional
filters on their metadata:
\itemize{
\item \code{\link{osem_box}}: Get metadata about a single box by its ID.
\item \code{\link{osem_boxes}}: Get metadata about all boxes, optionally
filtered by their attributes.
}
The data is returned as a \code{\link{data.frame}} with the class
\code{sensebox} attached.
To help in getting an overview of the dataset, additional functions are
implemented:
\itemize{
\item \code{summary.sensebox()}: Aggregate the metadata about the given
list of senseBoxes.
\item \code{plot.sensebox()}: Shows the spatial distribution of the given
list of senseBoxes on a map. Requires additional packages!
\item \code{\link{osem_phenomena}}: Get a named list with
counts of the measured phenomena of the given list of senseBoxes.
}
}
\section{Retrieving measurements}{
There are two ways to retrieve measurements:
\itemize{
\item \code{\link{osem_measurements_archive}}:
Downloads measurements for a \emph{single box} from the openSenseMap archive.
This function does not provide realtime data, but is suitable for long time frames.
\item \code{\link{osem_measurements}}:
This function retrieves (realtime) measurements from the API. It works for a
\emph{single phenomenon} only, but provides various filters to select sensors by
\itemize{
\item a list of senseBoxes, previously retrieved through
\code{\link{osem_box}} or \code{\link{osem_boxes}},
\item a geographic bounding box, which can be generated with the
\code{\link[sf]{sf}} package,
\item a time frame,
\item an exposure type of the given box.
}
Use this function with caution for long time frames, as the API becomes
quite slow and is limited to 10,000 measurements per 30-day interval.
}
Data is returned as \code{tibble} with the class \code{osem_measurements}.
}
\section{Retrieving statistics}{
Count statistics about the database are provided with \code{\link{osem_counts}}.
}
\section{Using a different API instance / endpoint}{
You can override the functions \code{osem_endpoint} and \code{osem_endpoint_archive}
inside the package namespace:
\code{
assignInNamespace("osem_endpoint", function() "http://mynewosem.org", "opensensmapr")
}
}
\section{Integration with other packages}{
The package aims to be compatible with the tidyverse.
Helpers are implemented to ease the further usage of the retrieved data:
\itemize{
\item \code{\link{osem_as_sensebox}} & \code{\link{osem_as_measurements}}:
Transform a foreign object to a sensebox data.frame or osem_measurements
by attaching the required classes and attributes.
\item \code{\link{st_as_sf.sensebox}} & \code{\link{st_as_sf.osem_measurements}}:
Transform the senseBoxes or measurements into an \code{\link[sf]{sf}}
compatible format for spatial analysis.
\item \code{filter.sensebox()} & \code{mutate.sensebox()}: for use with
\code{\link{dplyr}}.
}
}
\seealso{
Report bugs at \url{https://github.com/sensebox/opensensmapR/issues}

openSenseMap API: \url{https://api.opensensemap.org/}

official openSenseMap API documentation: \url{https://docs.opensensemap.org/}
}
\author{
\strong{Maintainer}: Jan Stenkamp \email{jan.stenkamp@uni-muenster.de} [contributor]

Authors:
\itemize{
\item Norwin Roosen \email{hello@nroo.de}
}

Other contributors:
\itemize{
\item Daniel Nuest \email{daniel.nuest@uni-muenster.de} (\href{https://orcid.org/0000-0003-2392-6140}{ORCID}) [contributor]
}
}

@ -0,0 +1,15 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/archive.R
\name{osem_archive_endpoint}
\alias{osem_archive_endpoint}
\title{Returns the default endpoint for the archive *download*
While the front end domain is archive.opensensemap.org, file downloads
are provided via sciebo.}
\usage{
osem_archive_endpoint()
}
\description{
Returns the default endpoint for the archive *download*
While the front end domain is archive.opensensemap.org, file downloads
are provided via sciebo.
}

@ -0,0 +1,18 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/measurement_utils.R
\name{osem_as_measurements}
\alias{osem_as_measurements}
\title{Converts a foreign object to an osem_measurements data.frame.}
\usage{
osem_as_measurements(x)
}
\arguments{
\item{x}{A data.frame to attach the class to.
Should have at least a `value` and `createdAt` column.}
}
\value{
data.frame of class \code{osem_measurements}
}
\description{
Converts a foreign object to an osem_measurements data.frame.
}

man/osem_as_sensebox.Rd
@ -0,0 +1,17 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/box_utils.R
\name{osem_as_sensebox}
\alias{osem_as_sensebox}
\title{Converts a foreign object to a sensebox data.frame.}
\usage{
osem_as_sensebox(x)
}
\arguments{
\item{x}{A data.frame to attach the class to}
}
\value{
data.frame of class \code{sensebox}
}
\description{
Converts a foreign object to a sensebox data.frame.
}

@ -1,21 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils.R
\name{osem_as_sf}
\alias{osem_as_sf}
\title{Convert a \code{sensebox} or \code{osem_measurements} dataframe to an
\code{\link[sf]{st_sf}} object.}
\usage{
osem_as_sf(x, ...)
}
\arguments{
\item{x}{The object to convert}
\item{...}{maybe more objects to convert}
}
\value{
The object with an st_geometry column attached.
}
\description{
Convert a \code{sensebox} or \code{osem_measurements} dataframe to an
\code{\link[sf]{st_sf}} object.
}

@ -4,12 +4,15 @@
\alias{osem_box}
\title{Get a single senseBox by its ID}
\usage{
osem_box(boxId, endpoint = osem_endpoint(), cache = NA)
}
\arguments{
\item{boxId}{A string containing a senseBox ID}

\item{endpoint}{The URL of the openSenseMap API instance}

\item{cache}{Whether to cache the result, defaults to false.
If a valid path to a directory is given, the response will be cached there.
Subsequent identical requests will return the cached data instead.}
}
\value{
A \code{sensebox data.frame} containing a box in each row
@ -18,12 +21,24 @@ A \code{sensebox data.frame} containing a box in each row
Get a single senseBox by its ID
}
\examples{
\dontrun{
# get a specific box by ID
b = osem_box('57000b8745fd40c8196ad04c')

# get a specific box by ID from a custom (selfhosted) openSenseMap API
b = osem_box('51030b8725fd30c2196277da', 'http://api.my-custom-osem.com')

# get a specific box by ID and cache the response, in order to provide
# reproducible results in the future.
b = osem_box('51030b8725fd30c2196277da', cache = tempdir())
}
}
\seealso{
\href{https://docs.opensensemap.org/#api-Measurements-findAllBoxes}{openSenseMap API documentation (web)}

\code{\link{osem_phenomena}}

\code{\link{osem_boxes}}

\code{\link{osem_clear_cache}}
}

@ -0,0 +1,19 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/archive.R
\name{osem_box_to_archivename}
\alias{osem_box_to_archivename}
\title{replace chars in box name according to archive script:
https://github.com/sensebox/osem-archiver/blob/612e14b/helpers.sh#L66}
\usage{
osem_box_to_archivename(box)
}
\arguments{
\item{box}{A sensebox data.frame}
}
\value{
character with archive identifier for each box
}
\description{
replace chars in box name according to archive script:
https://github.com/sensebox/osem-archiver/blob/612e14b/helpers.sh#L66
}

@ -4,9 +4,19 @@
\alias{osem_boxes}
\title{Get a set of senseBoxes from the openSenseMap}
\usage{
osem_boxes(
  exposure = NA,
  model = NA,
  grouptag = NA,
  date = NA,
  from = NA,
  to = NA,
  phenomenon = NA,
  bbox = NA,
  endpoint = osem_endpoint(),
  progress = TRUE,
  cache = NA
)
}
\arguments{
\item{exposure}{Only return boxes with the given exposure ('indoor', 'outdoor', 'mobile')}
@ -24,7 +34,18 @@ osem_boxes(exposure = NA, model = NA, grouptag = NA, date = NA,
\item{phenomenon}{Only return boxes that measured the given phenomenon in the
time interval as specified through \code{date} or \code{from / to}}

\item{bbox}{Only return boxes that are within the given boundingbox,
vector of 4 WGS84 coordinates.
Order is: longitude southwest, latitude southwest, longitude northeast, latitude northeast.
Minimal and maximal values are: -180, 180 for longitude and -90, 90 for latitude.}

\item{endpoint}{The URL of the openSenseMap API instance}

\item{progress}{Whether to print download progress information, defaults to \code{TRUE}}

\item{cache}{Whether to cache the result, defaults to false.
If a valid path to a directory is given, the response will be cached there.
Subsequent identical requests will return the cached data instead.}
}
\value{
A \code{sensebox data.frame} containing a box in each row
@ -39,18 +60,47 @@ Note that some filters do not work together:
}
}
\examples{
\dontrun{
# get *all* boxes available on the API
b = osem_boxes()

# get all boxes with grouptag 'ifgi' that are placed outdoors
b = osem_boxes(grouptag = 'ifgi', exposure = 'outdoor')

# get all boxes with model 'luftdaten_sds011_dht22'
b = osem_boxes(model = 'luftdaten_sds011_dht22')

# get all boxes that have measured PM2.5 in the last 4 hours
b = osem_boxes(date = Sys.time(), phenomenon = 'PM2.5')

# get all boxes that have measured PM2.5 between Jan & Feb 2018
library(lubridate)
b = osem_boxes(
  from = date('2018-01-01'),
  to = date('2018-02-01'),
  phenomenon = 'PM2.5'
)

# get all boxes from a custom (selfhosted) openSenseMap API
b = osem_boxes(endpoint = 'http://api.my-custom-osem.com')

# get all boxes and cache the response, in order to provide
# reproducible results in the future. Also useful for development
# to avoid repeated loading times!
b = osem_boxes(cache = getwd())
b = osem_boxes(cache = getwd())

# get *all* boxes available on the API, without showing download progress
b = osem_boxes(progress = FALSE)
}
}
\seealso{
\href{https://docs.opensensemap.org/#api-Measurements-findAllBoxes}{openSenseMap API documentation (web)}

\code{\link{osem_phenomena}}

\code{\link{osem_box}}

\code{\link{osem_clear_cache}}
}

man/osem_clear_cache.Rd
@ -0,0 +1,29 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/api.R
\name{osem_clear_cache}
\alias{osem_clear_cache}
\title{Purge cached responses from the given cache directory}
\usage{
osem_clear_cache(location = tempdir())
}
\arguments{
\item{location}{A path to the cache directory, defaults to the
session's \code{tempdir()}}
}
\value{
Boolean whether the deletion was successful
}
\description{
Purge cached responses from the given cache directory
}
\examples{
\dontrun{
osem_boxes(cache = tempdir())
osem_clear_cache()
cachedir = paste(getwd(), 'osemcache', sep = '/')
dir.create(file.path(cachedir), showWarnings = FALSE)
osem_boxes(cache = cachedir)
osem_clear_cache(cachedir)
}
}

@ -4,10 +4,14 @@
\alias{osem_counts}
\title{Get count statistics of the openSenseMap Instance}
\usage{
osem_counts(endpoint = osem_endpoint(), cache = NA)
}
\arguments{
\item{endpoint}{The URL of the openSenseMap API}

\item{cache}{Whether to cache the result, defaults to false.
If a valid path to a directory is given, the response will be cached there.
Subsequent identical requests will return the cached data instead.}
}
\value{
A named \code{list} containing the counts

man/osem_endpoint.Rd
@ -0,0 +1,14 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/api.R
\name{osem_endpoint}
\alias{osem_endpoint}
\title{Get the default openSenseMap API endpoint}
\usage{
osem_endpoint()
}
\value{
A character string with the HTTP URL of the openSenseMap API
}
\description{
Get the default openSenseMap API endpoint
}

@ -0,0 +1,17 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/api.R
\name{osem_ensure_api_available}
\alias{osem_ensure_api_available}
\title{Check if the given openSenseMap API endpoint is available}
\usage{
osem_ensure_api_available(endpoint = osem_endpoint())
}
\arguments{
\item{endpoint}{The API base URL to check, defaulting to \code{\link{osem_endpoint}}}
}
\value{
\code{TRUE} if the API is available, otherwise \code{stop()} is called.
}
\description{
Check if the given openSenseMap API endpoint is available
}

@ -0,0 +1,17 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/archive.R
\name{osem_ensure_archive_available}
\alias{osem_ensure_archive_available}
\title{Check if the given openSenseMap archive endpoint is available}
\usage{
osem_ensure_archive_available(endpoint = osem_archive_endpoint())
}
\arguments{
\item{endpoint}{The archive base URL to check, defaulting to \code{\link{osem_archive_endpoint}}}
}
\value{
\code{TRUE} if the archive is available, otherwise \code{stop()} is called.
}
\description{
Check if the given openSenseMap archive endpoint is available
}

@ -5,18 +5,37 @@
\alias{osem_measurements.default} \alias{osem_measurements.default}
\alias{osem_measurements.bbox} \alias{osem_measurements.bbox}
\alias{osem_measurements.sensebox} \alias{osem_measurements.sensebox}
\title{Get the Measurements of a Phenomenon on opensensemap.org} \title{Fetch the Measurements of a Phenomenon on opensensemap.org}
\usage{ \usage{
osem_measurements(x, ...) osem_measurements(x, ...)
\method{osem_measurements}{default}(x, ...) \method{osem_measurements}{default}(x, ...)
\method{osem_measurements}{bbox}(x, phenomenon, exposure = NA, from = NA, \method{osem_measurements}{bbox}(
to = NA, columns = NA, ..., endpoint = "https://api.opensensemap.org") x,
phenomenon,
exposure = NA,
from = NA,
to = NA,
columns = NA,
...,
endpoint = osem_endpoint(),
progress = TRUE,
cache = NA
)
\method{osem_measurements}{sensebox}(x, phenomenon, exposure = NA, \method{osem_measurements}{sensebox}(
from = NA, to = NA, columns = NA, ..., x,
endpoint = "https://api.opensensemap.org") phenomenon,
exposure = NA,
from = NA,
to = NA,
columns = NA,
...,
endpoint = osem_endpoint(),
progress = TRUE,
cache = NA
)
} }
\arguments{ \arguments{
\item{x}{Depending on the method, either \item{x}{Depending on the method, either
@ -37,9 +56,14 @@ osem_measurements(x, ...)
\item{to}{A \code{POSIXt} like object to select a time interval} \item{to}{A \code{POSIXt} like object to select a time interval}
\item{columns}{Select specific column in the output (see oSeM documentation)} \item{columns}{Select specific column in the output (see openSenseMap API documentation)}
\item{endpoint}{The URL of the openSenseMap API} \item{endpoint}{The URL of the openSenseMap API}
\item{progress}{Whether to print download progress information}
\item{cache}{Whether to cache the result, defaults to false.
If a valid path to a directory is given, the response will be cached there. Subsequent identical requests will return the cached data instead.}
} }
\value{ \value{
An \code{osem_measurements data.frame} containing the An \code{osem_measurements data.frame} containing the
@ -52,38 +76,87 @@ a bounding box spanning the whole world.
} }
\section{Methods (by class)}{ \section{Methods (by class)}{
\itemize{ \itemize{
\item \code{default}: Get measurements from \strong{all} senseBoxes. \item \code{osem_measurements(default)}: Get measurements from \strong{all} senseBoxes.
\item \code{bbox}: Get measurements by a spatial filter. \item \code{osem_measurements(bbox)}: Get measurements by a spatial filter.
\item \code{osem_measurements(sensebox)}: Get measurements from a set of senseBoxes.
\item \code{sensebox}: Get measurements from a set of senseBoxes.
}} }}
\examples{ \examples{
# get measurements from all boxes
\dontrun{ \dontrun{
osem_measurements('PM2.5') # get measurements from all boxes on the phenomenon 'PM10' from the last 48h
m = osem_measurements('PM10')
# get measurements from all mobile boxes on the phenomenon 'PM10' from the last 48h
m = osem_measurements('PM10', exposure = 'mobile')
# get measurements and cache them locally in the working directory.
# subsequent identical requests will load from the cache instead, ensuring
# reproducibility and saving time and bandwidth!
m = osem_measurements('PM10', exposure = 'mobile', cache = getwd())
m = osem_measurements('PM10', exposure = 'mobile', cache = getwd())
# get measurements returning a custom selection of columns
m = osem_measurements('PM10', exposure = 'mobile', columns = c(
'value',
'boxId',
'sensorType',
'lat',
'lon',
'height'
))
} }
\dontrun{
# get measurements from sensors within a custom WGS84 bounding box
bbox = structure(c(7, 51, 8, 52), class = 'bbox')
m = osem_measurements(bbox, 'Temperatur')
# get measurements from sensors within a bounding box # construct a bounding box 12km around berlin using the sf package,
bbox = structure(c(7, 51, 8, 52), class = 'bbox') # and get measurements from stations within that box
osem_measurements(bbox, 'Temperatur') library(sf)
library(units)
bbox2 = st_point(c(13.4034, 52.5120)) \%>\%
st_sfc(crs = 4326) \%>\%
st_transform(3857) \%>\% # allow setting a buffer in meters
st_buffer(set_units(12, km)) \%>\%
st_transform(4326) \%>\% # the opensensemap expects WGS 84
st_bbox()
m = osem_measurements(bbox2, 'Temperatur', exposure = 'outdoor')
points = sf::st_multipoint(matrix(c(7, 8, 51, 52), 2, 2)) # construct a bounding box from two points,
bbox2 = sf::st_bbox(points) # and get measurements from stations within that box
osem_measurements(bbox2, 'Temperatur', exposure = 'outdoor') points = st_multipoint(matrix(c(7.5, 7.8, 51.7, 52), 2, 2))
bbox3 = st_bbox(points)
m = osem_measurements(bbox2, 'Temperatur', exposure = 'outdoor')
}
\donttest{
# get measurements from a set of boxes
b = osem_boxes(grouptag = 'ifgi')
m4 = osem_measurements(b, phenomenon = 'Temperatur')
# get measurements from a set of boxes # ...or a single box
b = osem_boxes(grouptag = 'ifgi') b = osem_box('57000b8745fd40c8196ad04c')
osem_measurements(b, phenomenon = 'Temperatur') m5 = osem_measurements(b, phenomenon = 'Temperatur')
# ...or a single box
b = osem_box('593bcd656ccf3b0011791f5a')
osem_measurements(b, phenomenon = 'Temperatur')
# get measurements from a single box from the last 40 days.
# requests are paged for long time frames, so the API's limitation
# does not apply!
library(lubridate)
m1 = osem_measurements(
b,
'Temperatur',
to = now(),
from = now() - days(40)
)
}
}
\seealso{
\href{https://docs.opensensemap.org/#api-Measurements-getDataMulti}{openSenseMap API documentation (web)}
\code{\link{osem_box}}
\code{\link{osem_boxes}}
\code{\link{osem_clear_cache}}
}


@ -0,0 +1,75 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/archive.R
\name{osem_measurements_archive}
\alias{osem_measurements_archive}
\alias{osem_measurements_archive.sensebox}
\title{Fetch day-wise measurements for a single box from the openSenseMap archive.}
\usage{
osem_measurements_archive(x, ...)
\method{osem_measurements_archive}{sensebox}(
x,
fromDate,
toDate = fromDate,
sensorFilter = ~TRUE,
...,
progress = TRUE
)
}
\arguments{
\item{x}{A `sensebox data.frame` of a single box, as retrieved via \code{\link{osem_box}},
to download measurements for.}
\item{...}{see parameters below}
\item{fromDate}{Start date for measurement download, must be convertible via `as.Date`.}
\item{toDate}{End date for measurement download (inclusive).}
\item{sensorFilter}{An NSE formula matching to \code{x$sensors}, selecting a subset of sensors.}
\item{progress}{Whether to print download progress information, defaults to \code{TRUE}.}
}
\value{
A \code{tbl_df} containing observations of all selected sensors for each time stamp.
}
\description{
This function is significantly faster than \code{\link{osem_measurements}} for large
time-frames, as daily CSV dumps for each sensor from
\href{https://archive.opensensemap.org}{archive.opensensemap.org} are used.
Note that the latest data available is from the previous day.
}
\details{
By default, data for all sensors of a box is fetched, but you can select a
subset with a \code{\link[dplyr]{dplyr}}-style NSE filter expression.
The function will warn when no data is available in the selected period,
but continue the remaining download.
}
\section{Methods (by class)}{
\itemize{
\item \code{osem_measurements_archive(sensebox)}: Get daywise measurements for one or more sensors of a single box.
}}
\examples{
\donttest{
# fetch measurements for a single day
box = osem_box('593bcd656ccf3b0011791f5a')
m = osem_measurements_archive(box, as.POSIXlt('2018-09-13'))
# fetch measurements for a date range and selected sensors
sensors = ~ phenomenon \%in\% c('Temperatur', 'Beleuchtungsstärke')
m = osem_measurements_archive(
box,
as.POSIXlt('2018-09-01'), as.POSIXlt('2018-09-30'),
sensorFilter = sensors
)
}
}
\seealso{
\href{https://archive.opensensemap.org}{openSenseMap archive}
\code{\link{osem_measurements}}
\code{\link{osem_box}}
}


@ -21,23 +21,24 @@ Get the counts of sensors for each observed phenomenon.
}
\section{Methods (by class)}{
\itemize{
\item \code{osem_phenomena(sensebox)}: Get counts of sensors observing each phenomenon
from a set of senseBoxes.
}}
\examples{
# get the phenomena for a single senseBox
osem_phenomena(osem_box('593bcd656ccf3b0011791f5a'))
\donttest{
# get the phenomena for a group of senseBoxes
osem_phenomena(
  osem_boxes(grouptag = 'ifgi', exposure = 'outdoor', date = Sys.time())
)
# get phenomena with at least 30 sensors on opensensemap
phenoms = osem_phenomena(osem_boxes())
names(phenoms[phenoms > 29])
}
}
\seealso{
\code{\link{osem_boxes}}


@ -0,0 +1,19 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/external_generics.R
\name{st_as_sf.osem_measurements}
\alias{st_as_sf.osem_measurements}
\title{Convert an \code{osem_measurements} dataframe to an \code{\link[sf]{st_sf}} object.}
\usage{
st_as_sf.osem_measurements(x, ...)
}
\arguments{
\item{x}{The object to convert}
\item{...}{maybe more objects to convert}
}
\value{
The object with an st_geometry column attached.
}
\description{
Convert an \code{osem_measurements} dataframe to an \code{\link[sf]{st_sf}} object.
}

man/st_as_sf.sensebox.Rd

@ -0,0 +1,19 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/external_generics.R
\name{st_as_sf.sensebox}
\alias{st_as_sf.sensebox}
\title{Convert a \code{sensebox} dataframe to an \code{\link[sf]{st_sf}} object.}
\usage{
st_as_sf.sensebox(x, ...)
}
\arguments{
\item{x}{The object to convert}
\item{...}{maybe more objects to convert}
}
\value{
The object with an st_geometry column attached.
}
\description{
Convert a \code{sensebox} dataframe to an \code{\link[sf]{st_sf}} object.
}

tests/testthat.R

@ -0,0 +1,4 @@
library('testthat')
library('opensensmapr')
test_check('opensensmapr')

tests/testthat/lint.R

@ -0,0 +1,8 @@
if (requireNamespace('lintr', quietly = TRUE)) {
context('lints')
test_that('Package Style', {
skip_on_cran()
lintr::expect_lint_free()
})
}


@ -0,0 +1,7 @@
context('API error handling')
test_that('unavailable API yields informative error message', {
expect_error({
osem_boxes(endpoint = 'example.zip')
}, 'The API at example.zip is currently not available')
})


@ -0,0 +1,66 @@
source('testhelpers.R')
context('osem_box_to_archivename()')
try({
boxes = osem_boxes(grouptag = 'ifgi')
box = osem_box('593bcd656ccf3b0011791f5a')
})
test_that('osem_box_to_archive_name does the correct character replacements', {
b = data.frame(
name = 'aA1._- äß!"?$%&/',
X_id = 'UUID'
)
archivename = opensensmapr:::osem_box_to_archivename(b)
expect_equal(archivename, 'UUID-aA1._-__________')
})
test_that('osem_box_to_archive_name works for one box', {
check_api()
if (is.null(box)) skip('no box data could be fetched')
archivename = opensensmapr:::osem_box_to_archivename(box)
expect_length(archivename, 1)
expect_type(archivename, 'character')
})
test_that('osem_box_to_archive_name works for multiple boxes', {
check_api()
if (is.null(boxes)) skip('no box data available')
archivename = opensensmapr:::osem_box_to_archivename(boxes)
expect_length(archivename, nrow(boxes))
expect_type(archivename, 'character')
})
context('osem_measurements_archive()')
test_that('osem_measurements_archive works for one box', {
check_api()
if (is.null(box)) skip('no box data could be fetched')
m = osem_measurements_archive(box, as.POSIXlt('2018-08-08'))
expect_length(m, nrow(box$sensors[[1]]) + 1) # one column for each sensor + createdAt
expect_s3_class(m, c('data.frame'))
})
test_that('osem_measurements_archive sensorFilter works for one box', {
check_api()
if (is.null(box)) skip('no box data could be fetched')
m = osem_measurements_archive(box, as.POSIXlt('2018-08-08'), sensorFilter = ~ phenomenon == 'Temperatur')
expect_length(m, 2) # one column for Temperatur + createdAt
expect_s3_class(m, c('data.frame'))
})
test_that('osem_measurements_archive fails for multiple boxes', {
check_api()
if (is.null(boxes)) skip('no box data available')
expect_error(
osem_measurements_archive(boxes, as.POSIXlt('2018-08-08')),
'this function only works for exactly one senseBox!'
)
})

tests/testthat/test_box.R

@ -0,0 +1,102 @@
source('testhelpers.R')
context('box')
try({
box = osem_box('57000b8745fd40c8196ad04c')
})
test_that('required box attributes are correctly parsed', {
check_api()
expect_is(box$X_id, 'character')
expect_is(box$name, 'character')
expect_is(box$exposure, 'character')
expect_is(box$model, 'character')
expect_is(box$lat, 'numeric')
expect_is(box$lon, 'numeric')
expect_is(box$createdAt, 'POSIXct')
})
test_that('optional box attributes are correctly parsed', {
check_api()
completebox = osem_box('5a676e49411a790019290f94') # all fields populated
expect_is(completebox$description, 'character')
expect_is(completebox$grouptag, 'character')
expect_is(completebox$weblink, 'character')
expect_is(completebox$updatedAt, 'POSIXct')
expect_is(completebox$lastMeasurement, 'POSIXct')
expect_is(completebox$height, c('numeric', 'integer'))
expect_is(completebox$phenomena, 'list')
expect_is(completebox$phenomena[[1]], 'character')
expect_is(completebox$sensors, 'list')
expect_is(completebox$sensors[[1]], 'data.frame')
# box with older schema, not recently updated..
oldbox = osem_box('539fec94a8341554157931d7')
expect_null(oldbox$description)
expect_null(oldbox$grouptag)
expect_null(oldbox$weblink)
expect_null(oldbox$height)
expect_null(oldbox$lastMeasurement)
})
test_that('unknown box throws', {
check_api()
expect_error(osem_box('asdf'))
expect_error(osem_box('57000b8745fd40c800000000'), 'not found')
})
test_that("print.sensebox filters important attributes for a single box", {
check_api()
msg = capture.output({
print(box)
})
expect_false(any(grepl('description', msg)), 'should filter attribute "description"')
})
test_that("summary.sensebox outputs all metrics for a single box", {
check_api()
msg = capture.output({
summary(box)
})
expect_true(any(grepl('sensors per box:', msg)))
expect_true(any(grepl('oldest box:', msg)))
expect_true(any(grepl('newest box:', msg)))
expect_true(any(grepl('\\$last_measurement_within', msg)))
expect_true(any(grepl('boxes by model:', msg)))
expect_true(any(grepl('boxes by exposure:', msg)))
expect_true(any(grepl('boxes total: 1', msg)))
})
test_that('requests can be cached', {
check_api()
osem_clear_cache(tempdir())
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 0)
b = osem_box('57000b8745fd40c8196ad04c', cache = tempdir())
cacheFile = paste(
tempdir(),
opensensmapr:::osem_cache_filename('/boxes/57000b8745fd40c8196ad04c'),
sep = '/'
)
expect_true(file.exists(cacheFile))
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 1)
# no download output (works only in interactive mode..)
out = capture.output({
b = osem_box('57000b8745fd40c8196ad04c', cache = tempdir())
})
expect_length(out, 0)
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 1)
osem_clear_cache(tempdir())
expect_false(file.exists(cacheFile))
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 0)
})

tests/testthat/test_boxes.R

@ -0,0 +1,219 @@
source('testhelpers.R')
context('boxes')
try({
boxes = osem_boxes()
})
test_that('a list of all boxes can be retrieved and returns a sensebox data.frame', {
check_api()
expect_true(is.data.frame(boxes))
expect_true(is.factor(boxes$model))
expect_true(is.character(boxes$name))
expect_length(names(boxes), 18)
expect_true(any('sensebox' %in% class(boxes)))
})
test_that('both from and to are required when requesting boxes, error otherwise', {
expect_error(osem_boxes(from = as.POSIXct('2017-01-01')), 'must be used together')
expect_error(osem_boxes(to = as.POSIXct('2017-01-01')), 'must be used together')
})
test_that('a list of boxes with phenomenon filter returns only the requested phenomenon', {
check_api()
boxes_phen = osem_boxes(phenomenon = 'Temperatur', date = Sys.time())
expect_true(all(grep('Temperatur', boxes_phen$phenomena)))
})
test_that('a list of boxes with exposure filter returns only the requested exposure', {
check_api()
boxes_exp = osem_boxes(exposure = 'mobile')
expect_true(all(boxes_exp$exposure == 'mobile'))
})
test_that('a list of boxes with model filter returns only the requested model', {
check_api()
boxes_mod = osem_boxes(model = 'homeWifi')
expect_true(all(boxes_mod$model == 'homeWifi'))
})
test_that('box query can combine exposure and model filter', {
check_api()
boxes_com = osem_boxes(exposure = 'mobile', model = 'homeWifi')
expect_true(all(boxes_com$model == 'homeWifi'))
expect_true(all(boxes_com$exposure == 'mobile'))
})
test_that('a list of boxes with grouptype returns only boxes of that group', {
check_api()
boxes_gro = osem_boxes(grouptag = 'codeformuenster')
expect_true(all(boxes_gro$grouptag == 'codeformuenster'))
})
test_that('a list of boxes within a bbox only returns boxes within that bbox', {
check_api()
boxes_box = osem_boxes(bbox = c(7.8, 51.8, 8.0, 52.0))
expect_true(all(boxes_box$lon > 7.8 & boxes_box$lon < 8.0 & boxes_box$lat > 51.8 & boxes_box$lat < 52.0))
})
test_that('endpoint can be (mis)configured', {
check_api()
expect_error(osem_boxes(endpoint = 'http://not.the.opensensemap.org'), 'The API at http://not.the.opensensemap.org is currently not available.')
})
test_that('a response with no matches returns empty sensebox data.frame', {
check_api()
suppressWarnings({
boxes_gro = osem_boxes(grouptag = 'does_not_exist')
})
expect_true(is.data.frame(boxes_gro))
expect_true(any('sensebox' %in% class(boxes_gro)))
})
test_that('a response with no matches gives a warning', {
check_api()
expect_warning(osem_boxes(grouptag = 'does_not_exist'), 'no senseBoxes found')
})
test_that('data.frame can be converted to sensebox data.frame', {
df = osem_as_sensebox(data.frame(c(1, 2), c('a', 'b')))
expect_equal(class(df), c('sensebox', 'data.frame'))
})
test_that('boxes can be converted to sf object', {
check_api()
# boxes = osem_boxes()
boxes_sf = sf::st_as_sf(boxes)
expect_true(all(sf::st_is_simple(boxes_sf)))
expect_true('sf' %in% class(boxes_sf))
})
test_that('boxes converted to sf object keep all attributes', {
check_api()
# boxes = osem_boxes()
boxes_sf = sf::st_as_sf(boxes)
# coord columns get removed!
cols = names(boxes)[!names(boxes) %in% c('lon', 'lat')]
expect_true(all(cols %in% names(boxes_sf)))
expect_true('sensebox' %in% class(boxes_sf))
})
test_that('box retrieval does not give progress information in non-interactive mode', {
check_api()
if (!opensensmapr:::is_non_interactive()) skip('interactive session')
out = capture.output({
b = osem_boxes()
})
expect_length(out, 0)
})
test_that('print.sensebox filters important attributes for a set of boxes', {
check_api()
# boxes = osem_boxes()
msg = capture.output({
print(boxes)
})
expect_false(any(grepl('description', msg)), 'should filter attribute "description"')
})
test_that('summary.sensebox outputs all metrics for a set of boxes', {
check_api()
# boxes = osem_boxes()
msg = capture.output({
summary(boxes)
})
expect_true(any(grepl('sensors per box:', msg)))
expect_true(any(grepl('oldest box:', msg)))
expect_true(any(grepl('newest box:', msg)))
expect_true(any(grepl('\\$last_measurement_within', msg)))
expect_true(any(grepl('boxes by model:', msg)))
expect_true(any(grepl('boxes by exposure:', msg)))
expect_true(any(grepl('boxes total:', msg)))
})
test_that('requests can be cached', {
check_api()
osem_clear_cache()
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 0)
b = osem_boxes(cache = tempdir())
cacheFile = paste(
tempdir(),
opensensmapr:::osem_cache_filename('/boxes'),
sep = '/'
)
expect_true(file.exists(cacheFile))
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 1)
# no download output (works only in interactive mode..)
out = capture.output({
b = osem_boxes(cache = tempdir())
})
expect_length(out, 0)
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 1)
osem_clear_cache()
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 0)
})
context('single box from boxes')
test_that('a single box can be retrieved by ID', {
check_api()
box = osem_box(boxes$X_id[[1]])
expect_true('sensebox' %in% class(box))
expect_true('data.frame' %in% class(box))
expect_true(nrow(box) == 1)
expect_true(box$X_id == boxes$X_id[[1]])
expect_silent(osem_box(boxes$X_id[[1]]))
})
test_that('[.sensebox maintains attributes', {
check_api()
expect_true(all(attributes(boxes[1:nrow(boxes), ]) %in% attributes(boxes)))
})
context('measurements boxes')
test_that('measurements of specific boxes can be retrieved for one phenomenon and returns a measurements data.frame', {
check_api()
# fix for subsetting
class(boxes) = c('data.frame')
three_boxes = boxes[1:3, ]
class(boxes) = c('sensebox', 'data.frame')
three_boxes = osem_as_sensebox(three_boxes)
phens = names(osem_phenomena(three_boxes))
measurements = osem_measurements(x = three_boxes, phenomenon = phens[[1]])
expect_true(is.data.frame(measurements))
expect_true('osem_measurements' %in% class(measurements))
})
test_that('phenomenon is required when requesting measurements, error otherwise', {
check_api()
expect_error(osem_measurements(boxes), 'Parameter "phenomenon" is required')
})


@ -0,0 +1,38 @@
source('testhelpers.R')
context('counts')
test_that('counts can be retrieved as a list of numbers', {
check_api()
counts = osem_counts()
expect_true(is.list(counts))
expect_true(is.numeric(unlist(counts)))
expect_length(counts, 3)
})
test_that('requests can be cached', {
check_api()
osem_clear_cache()
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 0)
c = osem_counts(cache = tempdir())
cacheFile = paste(
tempdir(),
opensensmapr:::osem_cache_filename('/stats'),
sep = '/'
)
expect_true(file.exists(cacheFile))
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 1)
# no download output (works only in interactive mode..)
out = capture.output({
c = osem_counts(cache = tempdir())
})
expect_length(out, 0)
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 1)
osem_clear_cache()
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 0)
})


@ -0,0 +1,132 @@
source('testhelpers.R')
context('measurements')
test_that('measurements can be retrieved for a phenomenon', {
check_api()
measurements = osem_measurements('Windgeschwindigkeit')
measurements = osem_measurements(x = 'Windgeschwindigkeit')
expect_true(tibble::is_tibble(measurements))
expect_true('osem_measurements' %in% class(measurements))
})
test_that('measurement retrieval does not give progress information in non-interactive mode', {
check_api()
if (!opensensmapr:::is_non_interactive()) skip('interactive session')
out = capture.output({
measurements = osem_measurements(x = 'Licht')
})
expect_length(out, 0)
})
test_that('a response with no matching senseBoxes gives an error', {
check_api()
expect_error(osem_measurements(x = 'foobar', exposure = 'indoor'), 'No senseBoxes found')
})
test_that('columns can be specified for phenomena', {
check_api()
cols = c('value', 'boxId', 'boxName')
measurements = osem_measurements(x = 'Windgeschwindigkeit', columns = cols)
expect_equal(names(measurements), cols)
})
test_that('measurements can be retrieved for a phenomenon and exposure', {
check_api()
measurements = osem_measurements(x = 'Temperatur', exposure = 'unknown',
columns = c('value', 'boxId', 'boxName'))
expect_equal(nrow(measurements), 0)
})
test_that('measurements can be retrieved for a bounding box', {
check_api()
sfc = sf::st_sfc(sf::st_linestring(x = matrix(data = c(7, 8, 50, 51), ncol = 2)), crs = 4326)
bbox = sf::st_bbox(sfc)
measurements = osem_measurements(x = bbox, phenomenon = 'Windrichtung')
expect_true(all(unique(measurements$lat) > 50))
expect_true(all(unique(measurements$lat) < 51))
expect_true(all(unique(measurements$lon) < 8))
expect_true(all(unique(measurements$lon) > 7))
})
test_that('measurements can be retrieved for a time period', {
check_api()
from_date = as.POSIXct('2018-01-01 12:00:00')
to_date = as.POSIXct('2018-01-01 13:00:00')
measurements = osem_measurements(x = 'Temperature', from = from_date, to = to_date)
expect_true(all(measurements$createdAt < to_date))
expect_true(all(measurements$createdAt > from_date))
})
test_that('measurements can be retrieved for a time period > 31 days', {
check_api()
from_date = as.POSIXct('2017-11-01 12:00:00')
to_date = as.POSIXct('2018-01-01 13:00:00')
measurements = osem_measurements(x = 'Windrichtung', from = from_date, to = to_date)
expect_true(all(measurements$createdAt < to_date))
expect_true(all(measurements$createdAt > from_date))
})
test_that('both from and to are required when requesting measurements, error otherwise', {
expect_error(osem_measurements(x = 'Temperature', from = as.POSIXct('2017-01-01')), 'only together with')
expect_error(osem_measurements(x = 'Temperature', to = as.POSIXct('2017-01-01')), 'only together with')
})
test_that('phenomenon is required when requesting measurements, error otherwise', {
check_api()
expect_error(osem_measurements())
sfc = sf::st_sfc(sf::st_linestring(x = matrix(data = c(7, 8, 50, 51), ncol = 2)), crs = 4326)
bbox = sf::st_bbox(sfc)
expect_error(osem_measurements(bbox), 'Parameter "phenomenon" is required')
})
test_that('[.osem_measurements maintains attributes', {
check_api()
sfc = sf::st_sfc(sf::st_linestring(x = matrix(data = c(7, 8, 50, 51), ncol = 2)), crs = 4326)
bbox = sf::st_bbox(sfc)
m = osem_measurements(x = bbox, phenomenon = 'Windrichtung')
expect_true(all(attributes(m[1:nrow(m), ]) %in% attributes(m)))
})
test_that('data.frame can be converted to measurements data.frame', {
check_api()
m = osem_measurements('Windrichtung')
df = osem_as_measurements(data.frame(c(1, 2), c('a', 'b')))
expect_equal(class(df), class(m))
})
test_that('requests can be cached', {
check_api()
osem_clear_cache()
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 0)
osem_measurements('Windrichtung', cache = tempdir())
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 1)
# no download output (works only in interactive mode..)
out = capture.output({
m = osem_measurements('Windrichtung', cache = tempdir())
})
expect_length(out, 0)
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 1)
osem_clear_cache()
expect_length(list.files(tempdir(), pattern = 'osemcache\\..*\\.rds'), 0)
})


@ -0,0 +1,35 @@
source('testhelpers.R')
context('phenomena')
try({
boxes = osem_boxes()
all_phen = unique(unlist(boxes$phenomena))
})
test_that('phenomena from boxes is a list of counts', {
check_api()
phenomena = osem_phenomena(boxes)
expect_true(is.numeric(unlist(phenomena)))
expect_true(is.list(phenomena))
})
test_that('phenomena from boxes has all phenomena', {
check_api()
phenomena = osem_phenomena(boxes)
expect_true(all(all_phen %in% names(phenomena)))
expect_true(all(names(phenomena) %in% all_phen))
})
test_that('phenomena from a not sensebox data.frame returns error', {
check_api()
expect_error(osem_phenomena(list()), 'no applicable method')
expect_error(osem_phenomena(data.frame()), 'no applicable method')
boxes_df = boxes
class(boxes_df) = c('data.frame')
expect_error(osem_phenomena(boxes_df), 'no applicable method')
})


@ -0,0 +1,9 @@
check_api = function() {
skip_on_cran()
code = NA
try({
code = httr::status_code(httr::GET(osem_endpoint()))
})
if (is.na(code)) skip('API not available')
}


@ -1,25 +0,0 @@
# osem-monitor
Get the state of sensors and measurement counts for later analysis every 10 minutes.
The dataframe will reside in `./data/.RData`.
Further analysis can be done with the script `analyze.R`.
## docker image
```bash
# build
docker build --tag osem-monitor .
# run
docker run -v $(pwd)/data:/script/data osem-monitor
```
## run manually
```bash
# install dependencies once
Rscript -e 'install.packages(c("dplyr", "magrittr", "devtools"))'
Rscript -e 'devtools::install_github("noerw/opensensmapR")'
Rscript --save --restore osem-monitor.R
```


@ -7,7 +7,7 @@ RUN apt-get update && \
RUN Rscript -e 'install.packages("sf")'
RUN Rscript -e 'install.packages("magrittr")'
RUN Rscript -e 'install.packages("devtools")'
RUN Rscript -e 'devtools::install_github("sensebox/opensensmapR")'
# install crontab
COPY crontab /crontab

tools/monitr/README.md

@ -0,0 +1,28 @@
# osem-monitr
Get the state of sensors and measurement counts for later analysis every 15 minutes.
The dataframes will reside in `./data/.RData`.
Further analysis can be done with the script `analyz.R`.
## docker image
```bash
alias dockr='docker '
# build
docker build --tag osem-monitr .
# run
docker run -v $(pwd)/data:/script/data osem-monitr
```
## run manually
```bash
# install dependencies once
Rscript -e 'install.packages(c("dplyr", "magrittr", "devtools"))'
Rscript -e 'devtools::install_github("sensebox/opensensmapR")'
Rscript --save --restore get-counts.R
Rscript --save --restore get-boxes.R
```


@ -15,7 +15,7 @@ neverActive = b[is.na(b$lastMeasurement), ] %>% nrow()
b_agg = data.frame(time = Sys.time(), boxcount = nrow(b))
b_agg$model = b$model %>% table() %>% as.list() %>% list()
b_agg$exposure = b$exposure %>% table() %>% as.list() %>% list()
b_agg$geometry = b %>% st_as_sf() %>% st_geometry() %>% list()
b_agg$phenomena = b %>% osem_phenomena() %>% list()
b_agg$active = list(

Binary file not shown.

vignettes/osem-history.Rmd

@ -0,0 +1,246 @@
---
title: "Visualising the History of openSenseMap.org"
author: "Norwin Roosen"
date: '`r Sys.Date()`'
output:
rmarkdown::html_vignette:
df_print: kable
fig_height: 5
fig_width: 7
toc: yes
html_document:
code_folding: hide
df_print: kable
theme: lumen
toc: yes
toc_float: yes
vignette: >
%\VignetteIndexEntry{Visualising the History of openSenseMap.org}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
> This vignette serves as an example on data wrangling & visualization with
`opensensmapr`, `dplyr` and `ggplot2`.
```{r setup, results='hide', message=FALSE, warning=FALSE}
# required packages:
library(opensensmapr) # data download
library(dplyr) # data wrangling
library(ggplot2) # plotting
library(lubridate) # date arithmetic
library(zoo) # rollmean()
```
openSenseMap.org has grown quite a bit in recent years; it would be interesting
to see how we got to the current `r osem_counts()$boxes` sensor stations,
split up by various attributes of the boxes.
While `opensensmapr` provides extensive methods of filtering boxes by attributes
on the server, we do the filtering within R to save time and gain flexibility.
So the first step is to retrieve *all the boxes*:
```{r download}
# if you want to see results for a specific subset of boxes,
# just specify a filter such as grouptag='ifgi' here
# boxes = osem_boxes(cache = '.')
boxes = readRDS('boxes_precomputed.rds') # read precomputed file to save resources
```
# Plot count of boxes by time {.tabset}
By looking at the `createdAt` attribute of each box we know the exact time a box
was registered.
With this approach we have no information about boxes that were deleted in the
meantime, but that's okay for now.
## ...and exposure
```{r exposure_counts, message=FALSE}
exposure_counts = boxes %>%
group_by(exposure) %>%
mutate(count = row_number(createdAt))
exposure_colors = c(indoor = 'red', outdoor = 'lightgreen', mobile = 'blue', unknown = 'darkgrey')
ggplot(exposure_counts, aes(x = createdAt, y = count, colour = exposure)) +
geom_line() +
scale_colour_manual(values = exposure_colors) +
xlab('Registration Date') + ylab('senseBox count')
```
Outdoor boxes are growing *fast*!
We can also see the introduction of `mobile` sensor "stations" in 2017. While
mobile boxes are still few, we can expect a quick rise in 2018 once the new
senseBox MCU with GPS support is released.
Let's have a quick summary:
```{r exposure_summary}
exposure_counts %>%
summarise(
oldest = min(createdAt),
newest = max(createdAt),
count = max(count)
) %>%
arrange(desc(count))
```
## ...and grouptag
We can try to find out where the increases in growth came from, by analysing the
box count by grouptag.
Caveats: Only a small subset of boxes has a grouptag, and we should assume
that these groups are actually bigger. Also, we can see that grouptag naming is
inconsistent (`Luftdaten`, `luftdaten.info`, ...).
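One way to quantify that inconsistency (a quick sketch on made-up tag values, not data from the API) is to compare the number of distinct raw spellings with the number remaining after case normalization:

```{r grouptag_normalize}
# made-up example tags illustrating inconsistent naming
tags = c('Luftdaten', 'luftdaten.info', 'luftdaten', 'ifgi')
length(unique(tags))          # 4 distinct raw spellings
length(unique(tolower(tags))) # 3 once case variants collapse
```

A real cleanup would also have to merge variants like `luftdaten.info`, which requires knowledge of the individual communities.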
```{r grouptag_counts, message=FALSE}
grouptag_counts = boxes %>%
group_by(grouptag) %>%
# only include grouptags with 8 or more members
filter(length(grouptag) >= 8 & !is.na(grouptag)) %>%
mutate(count = row_number(createdAt))
# helper for sorting the grouptags by boxcount
sortLvls = function(oldFactor, ascending = TRUE) {
lvls = table(oldFactor) %>% sort(., decreasing = !ascending) %>% names()
factor(oldFactor, levels = lvls)
}
grouptag_counts$grouptag = sortLvls(grouptag_counts$grouptag, ascending = FALSE)
ggplot(grouptag_counts, aes(x = createdAt, y = count, colour = grouptag)) +
geom_line(aes(group = grouptag)) +
xlab('Registration Date') + ylab('senseBox count')
```
```{r grouptag_summary}
grouptag_counts %>%
summarise(
oldest = min(createdAt),
newest = max(createdAt),
count = max(count)
) %>%
arrange(desc(count))
```
# Plot rate of growth and inactivity per week
First we group the boxes by `createdAt` into bins of one week:
```{r growthrate_registered, warning=FALSE, message=FALSE, results='hide'}
bins = 'week'
mvavg_bins = 6
growth = boxes %>%
mutate(week = cut(as.Date(createdAt), breaks = bins)) %>%
group_by(week) %>%
summarize(count = length(week)) %>%
mutate(event = 'registered')
```
We can do the same for `updatedAt`, which informs us about the last change to
a box, including uploaded measurements.
This method of determining inactive boxes is fairly inaccurate and should be
considered an approximation, because we have no information about intermediate
inactive phases.
Also deleted boxes would probably have a big impact here.
```{r growthrate_inactive, warning=FALSE, message=FALSE, results='hide'}
inactive = boxes %>%
# remove boxes that were updated in the last two days,
# b/c any box becomes inactive at some point by definition of updatedAt
filter(updatedAt < now() - days(2)) %>%
mutate(week = cut(as.Date(updatedAt), breaks = bins)) %>%
group_by(week) %>%
summarize(count = length(week)) %>%
mutate(event = 'inactive')
```
Now we can combine both datasets for plotting:
```{r growthrate, warning=FALSE, message=FALSE, results='hide'}
boxes_by_date = bind_rows(growth, inactive) %>% group_by(event)
ggplot(boxes_by_date, aes(x = as.Date(week), colour = event)) +
xlab('Time') + ylab(paste('rate per ', bins)) +
scale_x_date(date_breaks="years", date_labels="%Y") +
scale_colour_manual(values = c(registered = 'lightgreen', inactive = 'grey')) +
geom_point(aes(y = count), size = 0.5) +
# moving average, make first and last value NA (to ensure identical length of vectors)
geom_line(aes(y = rollmean(count, mvavg_bins, fill = list(NA, NULL, NA))))
```
We see a sudden rise in early 2017, which lines up with the fast growing grouptag `Luftdaten`.
This was enabled by an integration of openSenseMap.org into the firmware of the
air quality monitoring project [luftdaten.info](https://sensor.community/de/).
The dips in mid 2017 and early 2018 could possibly be explained by production/delivery issues
of the senseBox hardware, but I have no data on the exact time frames to verify.
# Plot duration of boxes being active {.tabset}
While we are looking at `createdAt` and `updatedAt`, we can also extract the duration of activity
of each box, and look at metrics by exposure and grouptag once more:
## ...by exposure
```{r exposure_duration, message=FALSE}
duration = boxes %>%
group_by(exposure) %>%
filter(!is.na(updatedAt)) %>%
mutate(duration = difftime(updatedAt, createdAt, units='days'))
ggplot(duration, aes(x = exposure, y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days')
```
The time of activity averages at only `r round(mean(duration$duration))` days,
though there are boxes with `r round(max(duration$duration))` days of activity,
spanning a large chunk of openSenseMap's existence.
## ...by grouptag
```{r grouptag_duration, message=FALSE}
duration = boxes %>%
group_by(grouptag) %>%
# only include grouptags with 8 or more members
filter(length(grouptag) >= 8 & !is.na(grouptag) & !is.na(updatedAt)) %>%
mutate(duration = difftime(updatedAt, createdAt, units='days'))
ggplot(duration, aes(x = grouptag, y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days')
duration %>%
summarize(
duration_avg = round(mean(duration)),
duration_min = round(min(duration)),
duration_max = round(max(duration)),
oldest_box = round(max(difftime(now(), createdAt, units='days')))
) %>%
arrange(desc(duration_avg))
```
The time of activity averages at only `r round(mean(duration$duration))` days,
though there are boxes with `r round(max(duration$duration))` days of activity,
spanning a large chunk of openSenseMap's existence.
## ...by year of registration
This is less useful, as older boxes are active for a longer time by definition.
If you have an idea how to compensate for that, please send a [Pull Request][PR]!
```{r year_duration, message=FALSE}
# NOTE: boxes older than 2016 missing due to missing updatedAt in database
duration = boxes %>%
mutate(year = cut(as.Date(createdAt), breaks = 'year')) %>%
group_by(year) %>%
filter(!is.na(updatedAt)) %>%
mutate(duration = difftime(updatedAt, createdAt, units='days'))
ggplot(duration, aes(x = substr(as.character(year), 0, 4), y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days') + xlab('Year of Registration')
```
# More Visualisations
Other visualisations come to mind, and are left as an exercise to the reader.
If you implemented some, feel free to add them to this vignette via a [Pull Request][PR].
* growth by phenomenon
* growth by location -> (interactive) map
* set inactive rate in relation to total box count
* filter timespans with big dips in growth rate, and extrapolate the amount of
senseBoxes that could be on the platform today, assuming there were no production issues ;)
[PR]: https://github.com/sensebox/opensensmapr/pulls


@ -0,0 +1,297 @@
---
title: "Visualising the Development of openSenseMap.org in 2022"
author: "Jan Stenkamp"
date: '`r Sys.Date()`'
output:
html_document:
code_folding: hide
df_print: kable
theme: lumen
toc: yes
toc_float: yes
rmarkdown::html_vignette:
df_print: kable
fig_height: 5
fig_width: 7
toc: yes
vignette: >
%\VignetteIndexEntry{Visualising the Development of openSenseMap.org in 2022}
%\VignetteEncoding{UTF-8}
%\VignetteEngine{knitr::rmarkdown}
---
> This vignette serves as an example of data wrangling & visualization with
`opensensmapr`, `dplyr` and `ggplot2`.
```{r setup, results='hide', message=FALSE, warning=FALSE}
# required packages:
library(opensensmapr) # data download
library(dplyr) # data wrangling
library(ggplot2) # plotting
library(lubridate) # date arithmetic
library(zoo) # rollmean()
```
openSenseMap.org has grown quite a bit in the last years; it would be interesting
to see how we got to the current `r osem_counts()$boxes` sensor stations,
split up by various attributes of the boxes.
While `opensensmapr` provides extensive methods of filtering boxes by attributes
on the server, we do the filtering within R to save time and gain flexibility.
So the first step is to retrieve *all the boxes*.
```{r download, results='hide', message=FALSE, warning=FALSE}
# if you want to see results for a specific subset of boxes,
# just specify a filter such as grouptag='ifgi' here
# boxes = osem_boxes(cache = '.')
boxes = readRDS('boxes_precomputed.rds') # read precomputed file to save resources
```
# Introduction
In the following we only want to look at the boxes created in 2022, so we filter for them.
```{r}
boxes = filter(boxes, locationtimestamp >= "2022-01-01" & locationtimestamp <= "2022-12-31")
summary(boxes)
```
<!-- This gives a good overview already: As of writing this, there are more than 11,000 -->
<!-- sensor stations, of which ~30% are currently running. Most of them are placed -->
<!-- outdoors and have around 5 sensors each. -->
<!-- The oldest station is from August 2016, while the latest station was registered a -->
<!-- couple of minutes ago. -->
Another feature of interest is the spatial distribution of the boxes: `plot()`
can help us out here. This function requires a bunch of optional dependencies though.
```{r, message=FALSE, warning=FALSE}
plot(boxes)
```
But what do these sensor stations actually measure? Let's find out.
`osem_phenomena()` gives us a named list of the counts of each observed
phenomenon for the given set of sensor stations:
```{r}
phenoms = osem_phenomena(boxes)
str(phenoms)
```
That's quite some noise there, with many phenomena being measured by a single
sensor only, or many duplicated phenomena due to slightly different spellings.
We should clean that up, but for now let's just filter out the noise and find
those phenomena with high sensor numbers:
```{r}
phenoms[phenoms > 50]
```
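As a rough sketch of the cleanup mentioned above (column and variable names here are assumptions based on the named count list returned by `osem_phenomena()`, not an official cleaning routine), near-duplicate spellings could be merged by normalizing case and whitespace before summing the counts:

```r
# sketch: merge phenomena whose names differ only in case/whitespace
# assumes `phenoms` is the named count list from osem_phenomena()
pheno_df = data.frame(name = names(phenoms), count = unlist(phenoms))
pheno_df$key = tolower(trimws(pheno_df$name))
cleaned = aggregate(count ~ key, data = pheno_df, FUN = sum)
cleaned[order(-cleaned$count), ]
```

A proper cleanup would also need a manual mapping for genuinely different spellings (e.g. translations), which simple normalization cannot catch.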
# Plot count of boxes by time {.tabset}
By looking at the `createdAt` attribute of each box we know the exact time a box
was registered. Because of database migration issues, the `createdAt` values are mostly wrong (~80% of boxes appear to have been created on 2022-03-30), so we use the `timestamp` attribute of the `currentlocation` instead, which in most cases corresponds to the creation date.
With this approach we have no information about boxes that were deleted in the
meantime, but that's okay for now.
## ...and exposure
```{r exposure_counts, message=FALSE}
exposure_counts = boxes %>%
group_by(exposure) %>%
mutate(count = row_number(locationtimestamp))
exposure_colors = c(indoor = 'red', outdoor = 'lightgreen', mobile = 'blue', unknown = 'darkgrey')
ggplot(exposure_counts, aes(x = locationtimestamp, y = count, colour = exposure)) +
geom_line() +
scale_colour_manual(values = exposure_colors) +
xlab('Registration Date') + ylab('senseBox count')
```
Outdoor boxes are growing *fast*!
We can also see the introduction of `mobile` sensor "stations" in 2017.
Let's have a quick summary:
```{r exposure_summary}
exposure_counts %>%
summarise(
oldest = min(locationtimestamp),
newest = max(locationtimestamp),
count = max(count)
) %>%
arrange(desc(count))
```
## ...and grouptag
We can try to find out where the increases in growth came from, by analysing the
box count by grouptag.
Caveats: Only a small subset of boxes has a grouptag, and we should assume
that these groups are actually bigger. Also, we can see that grouptag naming is
inconsistent (`Luftdaten`, `luftdaten.info`, ...).
```{r grouptag_counts, message=FALSE}
grouptag_counts = boxes %>%
group_by(grouptag) %>%
# only include grouptags with 15 or more members
filter(length(grouptag) >= 15 & !is.na(grouptag) & grouptag != '') %>%
mutate(count = row_number(locationtimestamp))
# helper for sorting the grouptags by boxcount
sortLvls = function(oldFactor, ascending = TRUE) {
lvls = table(oldFactor) %>% sort(., decreasing = !ascending) %>% names()
factor(oldFactor, levels = lvls)
}
grouptag_counts$grouptag = sortLvls(grouptag_counts$grouptag, ascending = FALSE)
ggplot(grouptag_counts, aes(x = locationtimestamp, y = count, colour = grouptag)) +
geom_line(aes(group = grouptag)) +
xlab('Registration Date') + ylab('senseBox count')
```
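The grouptag inconsistency noted above could be reduced with a small normalization step before counting. This is only a sketch; the mapping shown is an illustrative assumption, not an official list of equivalent tags:

```r
# sketch: collapse grouptag spelling variants before grouping
# the set of variants mapped to 'luftdaten' is illustrative only
normalized = boxes %>%
  mutate(grouptag = tolower(trimws(grouptag))) %>%
  mutate(grouptag = ifelse(grouptag %in% c('luftdaten', 'luftdaten.info'),
                           'luftdaten', grouptag))
```

Running the grouptag counts on `normalized` instead of `boxes` would then merge these variants into one group.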
```{r grouptag_summary}
grouptag_counts %>%
summarise(
oldest = min(locationtimestamp),
newest = max(locationtimestamp),
count = max(count)
) %>%
arrange(desc(count))
```
# Plot rate of growth and inactivity per week
First we group the boxes by `locationtimestamp` into bins of one week:
```{r growthrate_registered, warning=FALSE, message=FALSE, results='hide'}
bins = 'week'
mvavg_bins = 6
growth = boxes %>%
mutate(week = cut(as.Date(locationtimestamp), breaks = bins)) %>%
group_by(week) %>%
summarize(count = length(week)) %>%
mutate(event = 'registered')
```
We can do the same for `updatedAt`, which informs us about the last change to
a box, including uploaded measurements. Because many boxes were "updated" in bulk by the
database migration and thus share an `updatedAt` of 2022-03-30, we use the `lastMeasurement`
attribute instead. This yields fewer boxes, but also automatically excludes
boxes that were created but never submitted a measurement.
This method of determining inactive boxes is fairly inaccurate and should be
considered an approximation, because we have no information about intermediate
inactive phases.
Also deleted boxes would probably have a big impact here.
```{r growthrate_inactive, warning=FALSE, message=FALSE, results='hide'}
inactive = boxes %>%
# remove boxes that were updated in the last two days,
  # b/c any box becomes inactive at some point by definition of lastMeasurement
filter(lastMeasurement < now() - days(2)) %>%
mutate(week = cut(as.Date(lastMeasurement), breaks = bins)) %>%
filter(as.Date(week) > as.Date("2021-12-31")) %>%
group_by(week) %>%
summarize(count = length(week)) %>%
mutate(event = 'inactive')
```
Now we can combine both datasets for plotting:
```{r growthrate, warning=FALSE, message=FALSE, results='hide'}
boxes_by_date = bind_rows(growth, inactive) %>% group_by(event)
ggplot(boxes_by_date, aes(x = as.Date(week), colour = event)) +
  xlab('Time') + ylab(paste('rate per', bins)) +
scale_x_date(date_breaks="years", date_labels="%Y") +
scale_colour_manual(values = c(registered = 'lightgreen', inactive = 'grey')) +
geom_point(aes(y = count), size = 0.5) +
# moving average, make first and last value NA (to ensure identical length of vectors)
geom_line(aes(y = rollmean(count, mvavg_bins, fill = list(NA, NULL, NA))))
```
And see in which weeks the most boxes become (in)active:
```{r table_mostregistrations}
boxes_by_date %>%
filter(count > 50) %>%
arrange(desc(count))
```
# Plot duration of boxes being active {.tabset}
While we are looking at `locationtimestamp` and `lastMeasurement`, we can also extract the duration of activity
of each box, and look at metrics by exposure and grouptag once more:
## ...by exposure
```{r exposure_duration, message=FALSE}
durations = boxes %>%
group_by(exposure) %>%
filter(!is.na(lastMeasurement)) %>%
mutate(duration = difftime(lastMeasurement, locationtimestamp, units='days')) %>%
filter(duration >= 0)
ggplot(durations, aes(x = exposure, y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days')
```
The time of activity averages at only `r round(mean(durations$duration))` days,
though there are boxes with `r round(max(durations$duration))` days of activity,
spanning a large chunk of openSenseMap's existence.
## ...by grouptag
```{r grouptag_duration, message=FALSE}
durations = boxes %>%
filter(!is.na(lastMeasurement)) %>%
group_by(grouptag) %>%
  # only include grouptags with 15 or more members
  filter(length(grouptag) >= 15 & !is.na(grouptag)) %>%
mutate(duration = difftime(lastMeasurement, locationtimestamp, units='days')) %>%
filter(duration >= 0)
ggplot(durations, aes(x = grouptag, y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days')
durations %>%
summarize(
duration_avg = round(mean(duration)),
duration_min = round(min(duration)),
duration_max = round(max(duration)),
oldest_box = round(max(difftime(now(), locationtimestamp, units='days')))
) %>%
arrange(desc(duration_avg))
```
The time of activity averages at only `r round(mean(durations$duration))` days,
though there are boxes with `r round(max(durations$duration))` days of activity,
spanning a large chunk of openSenseMap's existence.
## ...by year of registration
This is less useful, as older boxes are active for a longer time by definition.
If you have an idea how to compensate for that, please send a [Pull Request][PR]!
```{r year_duration, message=FALSE}
# NOTE: boxes older than 2016 missing due to missing updatedAt in database
duration = boxes %>%
mutate(year = cut(as.Date(locationtimestamp), breaks = 'year')) %>%
group_by(year) %>%
filter(!is.na(lastMeasurement)) %>%
mutate(duration = difftime(lastMeasurement, locationtimestamp, units='days')) %>%
filter(duration >= 0)
ggplot(duration, aes(x = substr(as.character(year), 0, 4), y = duration)) +
geom_boxplot() +
coord_flip() + ylab('Duration active in Days') + xlab('Year of Registration')
```
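One possible way to compensate, sketched here without any claim of being a validated metric, is to normalize each box's active duration by the time that has passed since its registration, so old and young boxes become comparable:

```r
# sketch: activity ratio = days active / days since registration
# values near 1 mean a box was active for most of its lifetime
ratio = boxes %>%
  filter(!is.na(lastMeasurement)) %>%
  mutate(
    active_days = as.numeric(difftime(lastMeasurement, locationtimestamp, units = 'days')),
    age_days    = as.numeric(difftime(now(), locationtimestamp, units = 'days')),
    activity_ratio = pmax(active_days, 0) / age_days
  )
summary(ratio$activity_ratio)
```

The per-year boxplots above could then be redrawn with `activity_ratio` on the y axis instead of the raw duration.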
# More Visualisations
Other visualisations come to mind, and are left as an exercise to the reader.
If you implemented some, feel free to add them to this vignette via a [Pull Request][PR].
* growth by phenomenon
* growth by location -> (interactive) map
* set inactive rate in relation to total box count
* filter timespans with big dips in growth rate, and extrapolate the amount of
senseBoxes that could be on the platform today, assuming there were no production issues ;)
[PR]: https://github.com/sensebox/opensensmapr/pulls
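As a starting point for the third idea, the weekly inactive count can be set in relation to the cumulative number of registrations. This is only a sketch, reusing the `growth` and `inactive` data frames from above and assuming both use the same weekly bins:

```r
# sketch: weekly inactive count as a share of all boxes registered so far
relative = growth %>%
  mutate(total = cumsum(count)) %>%     # cumulative registrations per week
  select(week, total) %>%
  inner_join(inactive, by = 'week') %>% # weeks present in both data frames
  mutate(inactive_share = count / total)

ggplot(relative, aes(x = as.Date(week), y = inactive_share)) +
  geom_line() +
  xlab('Time') + ylab('inactive boxes / total boxes')
```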


@@ -1,5 +1,5 @@
 ---
-title: "Analyzing environmental sensor data from openSenseMap.org in R"
+title: "Exploring the openSenseMap Dataset"
 author: "Norwin Roosen"
 date: "`r Sys.Date()`"
 output:
@@ -8,7 +8,7 @@ output:
   fig_width: 6
   fig_height: 4
 vignette: >
-  %\VignetteIndexEntry{Analyzing environmental sensor data from openSenseMap.org in R}
+  %\VignetteIndexEntry{Exploring the openSenseMap Dataset}
   %\VignetteEngine{knitr::rmarkdown}
   %\VignetteEncoding{UTF-8}
 ---
@@ -17,48 +17,38 @@ vignette: >
 knitr::opts_chunk$set(echo = TRUE)
 ```
-## Analyzing environmental sensor data from openSenseMap.org in R
 This package provides data ingestion functions for almost any data stored on the
-open data platform for environemental sensordata <https://opensensemap.org>.
+open data platform for environmental sensordata <https://opensensemap.org>.
 Its main goals are to provide means for:
 - big data analysis of the measurements stored on the platform
 - sensor metadata analysis (sensor counts, spatial distribution, temporal trends)
-> *Please note:* The openSenseMap API is sometimes a bit unstable when streaming
-long responses, which results in `curl` complaining about `Unexpected EOF`. This
-bug is being worked on upstream. Meanwhile you have to retry the request when
-this occurs.
 ### Exploring the dataset
 Before we look at actual observations, lets get a grasp of the openSenseMap
 datasets' structure.
-```{r results = F}
+```{r results = FALSE}
 library(magrittr)
 library(opensensmapr)
-all_sensors = osem_boxes()
+# all_sensors = osem_boxes(cache = '.')
+all_sensors = readRDS('boxes_precomputed.rds') # read precomputed file to save resources
 ```
 ```{r}
 summary(all_sensors)
 ```
-This gives a good overview already: As of writing this, there are more than 600
+This gives a good overview already: As of writing this, there are more than 700
 sensor stations, of which ~50% are currently running. Most of them are placed
 outdoors and have around 5 sensors each.
 The oldest station is from May 2014, while the latest station was registered a
 couple of minutes ago.
-Another feature of interest is the spatial distribution of the boxes. `plot()`
-can help us out here. This function requires a bunch of optional dependcies though.
-```{r message=F, warning=F}
-if (!require('maps')) install.packages('maps')
-if (!require('maptools')) install.packages('maptools')
-if (!require('rgeos')) install.packages('rgeos')
+Another feature of interest is the spatial distribution of the boxes: `plot()`
+can help us out here. This function requires a bunch of optional dependencies though.
+```{r, message=FALSE, warning=FALSE}
 plot(all_sensors)
 ```
@@ -88,7 +78,7 @@ We should check how many sensor stations provide useful data: We want only those
 boxes with a PM2.5 sensor, that are placed outdoors and are currently submitting
 measurements:
-```{r results = F}
+```{r results = FALSE, eval=FALSE}
 pm25_sensors = osem_boxes(
   exposure = 'outdoor',
   date = Sys.time(), # ±4 hours
@@ -96,6 +86,8 @@ pm25_sensors = osem_boxes(
 )
 ```
 ```{r}
+pm25_sensors = readRDS('pm25_sensors.rds') # read precomputed file to save resources
 summary(pm25_sensors)
 plot(pm25_sensors)
 ```
@@ -104,40 +96,61 @@ Thats still more than 200 measuring stations, we can work with that.
 ### Analyzing sensor data
 Having analyzed the available data sources, let's finally get some measurements.
-We could call `osem_measurements(pm25_sensors)` now, however we are focussing on
+We could call `osem_measurements(pm25_sensors)` now, however we are focusing on
 a restricted area of interest, the city of Berlin.
 Luckily we can get the measurements filtered by a bounding box:
-```{r}
+```{r, results=FALSE, message=FALSE}
 library(sf)
 library(units)
 library(lubridate)
+library(dplyr)
+```
+Since the API takes quite long to return measurements, especially filtered on space and time, we do not run the following chunks for publication of the package on CRAN.
+```{r bbox, results = FALSE, eval=FALSE}
 # construct a bounding box: 12 kilometers around Berlin
 berlin = st_point(c(13.4034, 52.5120)) %>%
   st_sfc(crs = 4326) %>%
   st_transform(3857) %>% # allow setting a buffer in meters
-  st_buffer(units::set_units(12, km)) %>%
+  st_buffer(set_units(12, km)) %>%
   st_transform(4326) %>% # the opensensemap expects WGS 84
   st_bbox()
-```
-```{r results = F}
 pm25 = osem_measurements(
   berlin,
   phenomenon = 'PM2.5',
-  from = now() - days(7), # defaults to 2 days
+  from = now() - days(3), # defaults to 2 days
   to = now()
 )
+```
+```{r}
+pm25 = readRDS('pm25_berlin.rds') # read precomputed file to save resources
 plot(pm25)
 ```
-Now we can get started with actual spatiotemporal data analysis. First plot the
-measuring locations:
-```{r}
-pm25_sf = osem_as_sf(pm25)
-plot(st_geometry(pm25_sf), axes = T)
+Now we can get started with actual spatiotemporal data analysis.
+First, let's mask the seemingly uncalibrated sensors:
+```{r, warning=FALSE}
+outliers = filter(pm25, value > 100)$sensorId
+bad_sensors = outliers[, drop = TRUE] %>% levels()
+pm25 = mutate(pm25, invalid = sensorId %in% bad_sensors)
 ```
-further analysis: `TODO`
+Then plot the measuring locations, flagging the outliers:
+```{r}
+st_as_sf(pm25) %>% st_geometry() %>% plot(col = factor(pm25$invalid), axes = TRUE)
+```
+Removing these sensors yields a nicer time series plot:
+```{r}
+pm25 %>% filter(invalid == FALSE) %>% plot()
+```
+Further analysis: comparison with LANUV data `TODO`


@ -0,0 +1,106 @@
---
title: "Caching openSenseMap Data for Reproducibility"
author: "Norwin Roosen"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Caching openSenseMap Data for Reproducibility}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
It may be useful to download data from openSenseMap only once.
For reproducible results, the data should be saved to disk, and reloaded at a
later point.
This avoids:
- changed results for queries without date parameters,
- unnecessary wait times,
- risk of API changes / API unavailability,
- stress on the openSenseMap-server.
This vignette shows how to use this built-in `opensensmapr` feature, and
how to do it yourself in case you want to save to other data formats.
```{r setup, results='hide'}
# this vignette requires:
library(opensensmapr)
library(jsonlite)
library(readr)
```
## Using the opensensmapr Caching Feature
All data retrieval functions of `opensensmapr` have a built-in caching feature,
which serializes an API response to disk.
Subsequent identical requests will then return the serialized data instead of making
another request.
To use this feature, just add a path to a directory to the `cache` parameter:
```{r cache}
b = osem_boxes(grouptag = 'ifgi', cache = tempdir())
# the next identical request will hit the cache only!
b = osem_boxes(grouptag = 'ifgi', cache = tempdir())
# requests without the cache parameter will still be performed normally
b = osem_boxes(grouptag = 'ifgi')
```
Looking at the cache directory we can see one file for each request, which is identified through a hash of the request URL:
```{r cachelisting}
list.files(tempdir(), pattern = 'osemcache\\..*\\.rds')
```
You can maintain multiple caches simultaneously, which allows you to keep the data related to a script in its own directory:
```{r cache_custom}
cacheDir = getwd() # current working directory
b = osem_boxes(grouptag = 'ifgi', cache = cacheDir)
# the next identical request will hit the cache only!
b = osem_boxes(grouptag = 'ifgi', cache = cacheDir)
```
To get fresh results again, just call `osem_clear_cache()` for the respective cache:
```{r clearcache, results='hide'}
osem_clear_cache() # clears default cache
osem_clear_cache(getwd()) # clears a custom cache
```
## Custom (De-) Serialization
If you want to roll your own serialization method to support custom data formats,
here's how:
```{r data, results='hide', eval=FALSE}
# first get our example data:
measurements = osem_measurements('Windgeschwindigkeit')
```
If you are paranoid and worry about `.rds` files not being decodable anymore
in the (distant) future, you could serialize to a plain text format such as JSON.
This of course comes at the cost of storage space and performance.
```{r serialize_json, eval=FALSE}
# serializing senseBoxes to JSON, and loading from file again:
write(jsonlite::serializeJSON(measurements), 'measurements.json')
measurements_from_file = jsonlite::unserializeJSON(readr::read_file('measurements.json'))
class(measurements_from_file)
```
This method also persists the R object metadata (classes, attributes).
If you were to use a serialization method that can't persist object metadata, you
could re-apply it with the following functions:
```{r serialize_attrs, eval=FALSE}
# note the toJSON call instead of serializeJSON
write(jsonlite::toJSON(measurements), 'measurements_bad.json')
measurements_without_attrs = jsonlite::fromJSON('measurements_bad.json')
class(measurements_without_attrs)
measurements_with_attrs = osem_as_measurements(measurements_without_attrs)
class(measurements_with_attrs)
```
The same goes for boxes via `osem_as_sensebox()`.
```{r cleanup, include=FALSE, eval=FALSE}
file.remove('measurements.json', 'measurements_bad.json')
```

BIN vignettes/pm25_berlin.rds (new binary file, not shown)

BIN vignettes/pm25_sensors.rds (new binary file, not shown)