
I have an R package that delivers learnr tutorials to users. The tutorials are essentially shiny apps. More precisely, they are `runtime: shiny_prerendered` interactive R Markdown documents.

When developing new tutorials and maintaining old ones, my team and I sometimes introduce bugs into some tutorials. I would like to have continuous integration via GitHub Actions that simply checks whether all of the tutorials present in the package can be rendered without errors.

I have gotten to the point where I can run arbitrary R code on GitHub Actions, but obviously that is not an interactive session. I know how to render the tutorials in an interactive context, but how can I render them in a non-interactive context, and report which tutorials fail to render?

I have also heard of unit testing for shiny apps, but I do not know how to apply it to my situation. My unit test would be something like "test that `learnr::run_tutorial("path/to/tutorial")` has no errors".
When this function is run, it automatically handles a lot of stuff, but I don't know how it works in non-interactive sessions. For example, it would be good to stop the tutorial's shiny process after it has rendered and run successfully.

Below is my YAML file in `.github/workflows`:

```yaml
on:
  push:
    branches: main

jobs:
  import-data:
    runs-on: ubuntu-latest
    steps:
      - name: Set up R
        uses: r-lib/actions/setup-r@v2

      - name: Install packages
        uses: r-lib/actions/setup-r-dependencies@v2
        with:
          packages: |
            any::tidyverse
            any::renv
            any::learnr
            any::shiny
            any::rmarkdown
            any::rcmdcheck
            any::rlang

      - name: Check out repository
        uses: actions/checkout@v3

      - name: Test render of tutorials
        run: Rscript -e 'source("./.github/rscripts/tutorial_action.R")'
```

And here is my R script that is run in the last step:

```r
# Build and install the local package
devtools::install()

# 1. Install necessary dependencies
deps <- renv::dependencies("./inst/tutorials")$Package
installed <- rownames(installed.packages())
to_install <- deps[!(deps %in% installed)]
install.packages(to_install)

# 2. Get all tutorial paths
tutorial_paths <- list.files("inst/tutorials",
                             pattern = "\\.Rmd$",
                             recursive = TRUE,
                             full.names = TRUE)

# 3. Try to test the tutorials
options(shiny.testmode = TRUE)

for (path in tutorial_paths) {
  learnr::run_tutorial(path, as_rstudio_job = FALSE)
  shiny::stopApp()
}
```

Everything works up to point 3, where I just tried things at random hoping something would work.

Edit: here is a minimal example of a typical tutorial:

````
---
title: "Tutorial"
output: learnr::tutorial
runtime: shiny_prerendered
---

```{r setup, include=FALSE}
library(learnr)
knitr::opts_chunk$set(echo = FALSE)
```

## Topic 1

### Exercise

*Here's a simple exercise with an empty code chunk provided for entering the answer.*

Write the R code required to add two plus two:

```{r two-plus-two, exercise=TRUE}

```
````

2 Answers


  1. Chosen as BEST ANSWER

    The answer of @Leon Samson is the right way to actually test the shiny part of shiny_prerendered documents. In my case, I do not develop the shiny part of the interactive documents; I only fill in text and code examples, so no unit testing of shiny parts is needed. I figured out that `rmarkdown::render()` achieves everything I want: it throws an error whenever rendering fails for some reason.

    Here is how I changed the last part (step 3) of the R script I provided above:

    # 3. Test render each tutorial
    for (p in tutorial_paths) {
      rmarkdown::render(p)
    }
    

    That does all I want in an uncomplicated fashion and works on GitHub Actions. To see the Action in action, have a look at it (GitHub).
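
    Since the question also asks for a report of which tutorials fail, the loop can collect errors instead of aborting at the first one. A minimal sketch, assuming the same `tutorial_paths` discovery as in the script from the question:

    ```r
    # Sketch: render every tutorial and report all failures at the end,
    # instead of aborting at the first error.
    tutorial_paths <- list.files("inst/tutorials",
                                 pattern = "\\.Rmd$",
                                 recursive = TRUE,
                                 full.names = TRUE)

    failures <- character(0)

    for (p in tutorial_paths) {
      msg <- tryCatch({
        rmarkdown::render(p, quiet = TRUE)
        NA_character_                        # render succeeded
      }, error = function(e) conditionMessage(e))
      if (!is.na(msg)) failures <- c(failures, paste0(p, ": ", msg))
    }

    if (length(failures) > 0) {
      # A non-zero exit status makes the GitHub Actions step fail,
      # with the log listing every offending tutorial at once
      stop("Tutorials failed to render:\n", paste(failures, collapse = "\n"))
    }
    ```

    With this variant, the Actions step fails only after every tutorial has been attempted, so a single run shows all broken tutorials rather than just the first.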


  2. You should first set up your unit-test environment locally and be able to run the (unit) tests successfully on your local machine. First, create a test for one app that passes. Then create tests for each of your applications.
    One way to do this is to use the shinytest2 package. You need the Chrome browser installed for this.

    Step 1. Your package should strictly follow the conventions of an R package. See here for more information. Running `devtools::load_all()` should be possible without errors. Notably, files in the R/ folder should only contain functions and should not cause problems when you source them. You should at least have the packages devtools and testthat installed.

    Step 2. Set up shinytest2. Run the following:

    install.packages("shinytest2")
    # add the package to your dependencies:
    usethis::use_package("shinytest2", type = "Suggests")
    # for creating local temporary files that are deleted after testing:
    usethis::use_package("withr", type = "Suggests")
    # set up testthat:
    usethis::use_testthat()
    # set up shinytest2:
    shinytest2::use_shinytest2()
    # create your first test:
    shinytest2::use_shinytest2_test()
    

    Your package folder should now look something like this, if your package is called ‘testlearnr’:

    ├── DESCRIPTION
    ├── inst
    │   └── tutorial1.Rmd
    ├── man
    │   └── hello.Rd
    ├── NAMESPACE
    ├── R
    ├── testlearnr.Rproj
    └── tests
        ├── testthat
        │   ├── fixtures
        │   ├── setup-shinytest2.R
    │   └── test-shinytest2.R
        └── testthat.R
    

    Step 3. Modify the test file `test-shinytest2.R`.
    Add an `AppDriver` as stated below for a minimal test (change the package name ‘testlearnr’ to your own package name).

    library(shinytest2)
    test_that("app1 works", {
      temp_file <- withr::local_tempfile(fileext = ".Rmd")
      app_location <- system.file("tutorial1.Rmd", package = "testlearnr")
      file.copy(app_location, temp_file)
      app <- AppDriver$new(
        app_dir = temp_file,
        name = "app1-test-1",
        width = 1619,
        height = 955
      )
      app$wait_for_idle()
      app$expect_values()
    })
    

    In the script above, we copy the application to a temporary location that is deleted after the test. This ensures that the test does not create any side effects (in this case: HTML files), because the app is embedded in an R Markdown document.

    Step 4. Run the test with `devtools::test_active_file()`. Iterate. Ensure the app works within the test. Use `browser()` within the test to debug interactively. Use `app$view()` to view the application interactively; this should open a headless Chrome browser with the application running. Define your formal test here. `app$expect_values()` is a general statement that saves a kind of snapshot of the application's state. After the first run, the snapshots are saved; from the next run on, the test should pass.
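
    Beyond snapshots, the `AppDriver` can also return the app's current state for targeted checks. A sketch of an alternative test, assuming the same ‘testlearnr’ package and tutorial as above (`get_values()` returns a list with `$input`, `$output`, and `$export` components):

    ```r
    library(testthat)
    library(shinytest2)

    test_that("tutorial1 starts and exposes values", {
      app <- AppDriver$new(
        app_dir = system.file("tutorial1.Rmd", package = "testlearnr"),
        name = "app1-values"
      )
      app$wait_for_idle()
      # Query the live state instead of (or in addition to) a snapshot
      vals <- app$get_values()
      expect_type(vals, "list")
      app$stop()
    })
    ```

    This is useful when a full snapshot is too brittle and you only care that the tutorial starts and exposes some state at all.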

    This should get you started. Once you are able to complete all these steps, you can run your tests as often as needed. Please read the information here for more details. This might already be enough for you. My advice is to focus on making your tests robust and reliable, so that they test what is needed and you can run them without problems. Once you have a robust automated testing structure, you can, if still required, use GitHub Actions for an even more automated workflow, as described here.
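
    For the CI part, the workflow file does not have to be written by hand. A minimal sketch, assuming the standard r-lib check workflow fits your package (it runs `R CMD check`, which executes the testthat/shinytest2 tests; GitHub-hosted runners already ship with Chrome):

    ```r
    # Generates .github/workflows/R-CMD-check.yaml from r-lib/actions;
    # every push then runs R CMD check, including the shinytest2 tests.
    usethis::use_github_action("check-standard")
    ```

    This reuses the maintained r-lib/actions workflow instead of a custom YAML file, so the dependency setup and caching are handled for you.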
