I am trying to copy a table from a webpage. There will be many such tables, since I am trying to get the versions of the data for each dataset, but so far I am failing to get even one. Scraping is not my thing; maybe the solution is obvious, but not to me.
Here is my code:
url <- "https://data.cms.gov/provider-characteristics/medicare-provider-supplier-enrollment/medicare-fee-for-service-public-provider-enrollment/api-docs"
html <- rvest::read_html(url)
> html |> rvest::html_node(".table")
{xml_missing}
<NA>
And
> html |>
rvest::html_node(xpath = "/html/body/div/div/div/div/div/div/div[2]/div[2]/div/div/table/tbody")
{xml_missing}
<NA>
And
html |>
rvest::html_node("tbody")
The output showed that JavaScript needed to be enabled.

The first answer works for the current dataset, and the second answer, which I posted, works to get most of the distributions where accessURL and title are present in the output.
2 Answers
I was also able to get most of what I want by doing the following:
Sample Output
Unfortunately this approach is not going to work. The tables on the page you're looking at are generated via JavaScript. The rvest::read_html(url) call will retrieve the static content of that page but will not execute any (dynamic) JavaScript. But there is an API behind the site, so you can get the data directly from that. For example:
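As a rough sketch of that approach, one option is to read the site's machine-readable catalog, assuming data.cms.gov publishes a standard DCAT catalog at https://data.cms.gov/data.json (the accessURL and title fields mentioned above are standard DCAT distribution fields). The catalog URL and the title filter below are assumptions, not taken from the original answer:

# Sketch: pull dataset metadata from the (assumed) DCAT catalog
library(jsonlite)

catalog <- jsonlite::fromJSON("https://data.cms.gov/data.json")
datasets <- catalog$dataset

# Pick out the dataset of interest by a partial title match (assumed title)
idx <- grepl("Public Provider Enrollment", datasets$title, fixed = TRUE)

# Each dataset carries its distributions (one row per version/format),
# with fields such as title and accessURL
distributions <- datasets$distribution[idx][[1]]
cols <- intersect(c("title", "accessURL"), names(distributions))
head(distributions[, cols])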
Alternatively, you can use something like {RSelenium} to evaluate the JavaScript and then scrape the fully rendered page.
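A minimal sketch of that route, assuming a working local Selenium/browser setup; the browser choice, port, and wait time are assumptions, and url is the variable defined at the top of the question:

library(RSelenium)
library(rvest)

# Start a Selenium-driven browser (browser and port are assumptions)
driver <- RSelenium::rsDriver(browser = "firefox", port = 4545L, verbose = FALSE)
remDr <- driver$client

remDr$navigate(url)
Sys.sleep(5)  # give the page's JavaScript time to render the tables

# Hand the fully rendered HTML back to rvest
page <- rvest::read_html(remDr$getPageSource()[[1]])
tables <- rvest::html_table(page)

remDr$close()
driver$server$stop()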