My goal is to extract all the href links from this page and find the .pdf links. I tried using the requests library and Selenium, but neither of them could extract them.
How can I solve this problem? Thank you.
Ex: This contains a .pdf file link
This is the requests code:
import requests
from bs4 import BeautifulSoup
headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/113.0'}
url="https://www.bain.com/insights/topics/energy-and-natural-resources-report/"
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.text, 'html.parser')
for link in soup.find_all('a'):
    print(link.get('href'))
This is the Selenium code:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup
options = webdriver.ChromeOptions()
driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=options)
driver.get("https://www.bain.com/insights/topics/energy-and-natural-resource-report/")
driver.implicitly_wait(10)
page_source = driver.page_source  # driver.get() returns None, so capture the source separately
soup = BeautifulSoup(page_source, 'html.parser')
for link in soup.find_all('a'):
    print(link.get('href'))
driver.quit()
2 Answers
The HTML of the link is an A tag whose href contains the string "pdf". You can use the CSS selector below to locate the download link:
a[href*='pdf']
This selector is just looking for an A tag that contains the string "pdf" in the href attribute.
I don't know if this affects BeautifulSoup the way it does Selenium, but the link is inside an IFRAME, so with Selenium you have to switch into that frame before the link can be located.
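A minimal Selenium sketch of that approach might look like the following. The iframe locator here (a plain iframe tag selector) is an assumption, since the original locator is not shown:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.bain.com/insights/topics/energy-and-natural-resources-report/")

wait = WebDriverWait(driver, 10)
# Assumption: the report is embedded in the first iframe on the page
wait.until(EC.frame_to_be_available_and_switch_to_it((By.TAG_NAME, "iframe")))

# Locate every anchor inside the frame whose href contains "pdf"
for link in wait.until(EC.presence_of_all_elements_located(
        (By.CSS_SELECTOR, "a[href*='pdf']"))):
    print(link.get_attribute("href"))

driver.quit()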
Here is a python-requests version showing how to get all the PDF links from that page:
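The original snippet is not reproduced above; a sketch of that approach, assuming the PDF links live inside an embedded iframe whose document has to be fetched separately, could look like this:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/113.0'}
url = "https://www.bain.com/insights/topics/energy-and-natural-resources-report/"

soup = BeautifulSoup(requests.get(url, headers=headers).text, 'html.parser')

# Assumption: the report links live inside an iframe, so fetch the
# iframe's own document and search that instead of the outer page
iframe = soup.find('iframe')
if iframe and iframe.get('src'):
    frame_url = urljoin(url, iframe['src'])
    soup = BeautifulSoup(requests.get(frame_url, headers=headers).text, 'html.parser')

# Collect every anchor whose href contains ".pdf"
for a in soup.select("a[href*='.pdf']"):
    print(a['href'])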
This prints the URL of each .pdf link found on the page.