I’m trying to scrape a table from a webpage using Selenium and BeautifulSoup but I’m not sure how to get to the actual data using BeautifulSoup.
webpage: https://leetify.com/app/match-details/5c438e85-c31c-443a-8257-5872d89e548c/details-general
I tried extracting the table rows (tag <tr>), but when I call find_all, the array is empty.
When I inspect the page I see several elements with a tr tag, so why don't they show up with BeautifulSoup.find_all()?
Code:
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome()
driver.get("https://leetify.com/app/match-details/5c438e85-c31c-443a-8257-5872d89e548c/details-general")
html_source = driver.page_source
soup = BeautifulSoup(html_source, 'html.parser')
table = soup.find_all("tbody")
print(len(table))
for entry in table:
    print(entry)
    print("\n")

Output:

3
Answers
After taking a quick glance, it seems like the page takes a long time to load.
The thing is, when you pass driver.page_source to BeautifulSoup, not all of the HTML has been rendered yet. So, the solution would be to use an explicit wait:
Wait until page is loaded with Selenium WebDriver for Python
or even a (less recommended) alternative, but I'm not 100% sure, since I don't currently have Selenium installed on my machine.
However, I'd like to offer a completely different solution:
If you take a look at your browser's network calls (press F12 to open the developer tools), you'll see that the data (the table) you're looking for is loaded by sending a GET request to their API. The endpoint is under:
which you can view directly from your browser.
So, you can directly use the requests library to make a GET request to the above endpoint, which will be much more efficient. Prints (truncated):
This approach bypasses the need to wait for the page to load, allowing you to directly access the data.
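A sketch of that approach. Note the endpoint URL below is a hypothetical placeholder, since the actual API path wasn't reproduced above; substitute the URL you see in your browser's Network tab:

```python
import requests

# Placeholder only: replace with the real endpoint from the Network tab.
API_URL = "https://api.example.com/match-details/..."

def fetch_match(url: str) -> dict:
    """GET the match JSON from the API endpoint and return it as a dict."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # fail loudly on HTTP errors
    return resp.json()

# data = fetch_match(API_URL)
# print(data)
```

Since the API already returns structured JSON, there is no browser, no rendering wait, and no HTML parsing involved.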
You need to give the webpage time to load. On most websites these days, the data is filled in from the client using AJAX. The code below can help, but it's not the ideal way: you should add logic to wait until a particular element is visible. Refer to https://selenium-python.readthedocs.io/waits.html
You don't need to use BeautifulSoup, as everything can be done with the selenium module as follows:
The point about waiting is that the page you're scraping is JavaScript-driven, so you need to be sure that the table element has been rendered before you try to analyse its contents.
The HTML content on this page is slightly unusual in that the table has more than one tbody element, so you'll probably want to handle that. The code in this answer simply emits the text from all tr elements in the table, with no consideration for which tbody they came from.