
I am trying to build a Python web scraper with BeautifulSoup4. The script works when I run it on my MacBook, but when I run it on my home server (an Ubuntu VM) I get the error message below. I tried a VPN connection and multiple headers without success.

I'd highly appreciate any feedback on how to get the script working. Thanks!

Here is the error message:

{'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.41 Safari/534.7 ChromePlus/1.5.0.0alpha1'}
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 699, in urlopen
    httplib_response = self._make_request(
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 445, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 440, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib/python3.10/http/client.py", line 1374, in getresponse
    response.begin()
  File "/usr/lib/python3.10/http/client.py", line 318, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.10/http/client.py", line 287, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

[...]

requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
[Finished in 15.9s with exit code 1]

Here is my code:

from bs4 import BeautifulSoup
import requests
import pyuser_agent

URL = "https://www.edmunds.com/inventory/srp.html?radius=5000&sort=publishDate%3Adesc&pagenumber=2"

# Pick a random User-Agent string so the request looks like a browser
ua = pyuser_agent.UA()
headers = {'User-Agent': ua.random}
print(headers)

response = requests.get(url=URL, headers=headers)
soup = BeautifulSoup(response.text, 'lxml')
overview = soup.find()  # first tag of the parsed page, just to check the response
print(overview)

I have tried multiple headers, but I do not get a result.

2 Answers


  1. Try to use a real web browser User-Agent instead of a random one from pyuser_agent. For example:

    import requests
    from bs4 import BeautifulSoup

    URL = "https://www.edmunds.com/inventory/srp.html?radius=5000&sort=publishDate%3Adesc&pagenumber=2"

    headers = {"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:108.0) Gecko/20100101 Firefox/108.0"}

    response = requests.get(url=URL, headers=headers)
    soup = BeautifulSoup(response.text, "lxml")
    overview = soup.find()
    print(overview)
    

    A possible explanation is that the server keeps a list of real-world User-Agents and doesn't serve pages to ones it doesn't recognize. You can test this directly, as in the sketch below.
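    A minimal sketch of that test (requests.utils.default_user_agent() is the library's stock "python-requests/x.y.z" string; the Firefox string is just an example):

    import requests

    URL = "https://www.edmunds.com/inventory/srp.html?radius=5000&sort=publishDate%3Adesc&pagenumber=2"

    # Compare the stock requests User-Agent with a real browser string;
    # if the server filters on User-Agent, only the second should succeed.
    agents = {
        "default": requests.utils.default_user_agent(),
        "browser": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:108.0) Gecko/20100101 Firefox/108.0",
    }
    for label, ua in agents.items():
        try:
            r = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
            print(label, r.status_code)
        except requests.exceptions.ConnectionError as exc:
            print(label, "failed:", exc)  # the RemoteDisconnected case from the question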

  2. I’m pretty bad at figuring out the right set of headers and cookies, so in these situations, I often end up resorting to:

    • either cloudscraper

      import cloudscraper

      response = cloudscraper.create_scraper().get(URL)
      

    • or HTMLSession from the requests-html package, which is particularly nifty in that it also parses the HTML and has some JavaScript support as well (a combined sketch follows this list)

      from requests_html import HTMLSession

      response = HTMLSession().get(URL)
      
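    Putting the two together as a self-contained sketch (assuming pip install cloudscraper requests-html; whether either one actually gets past the blocking on edmunds.com is not verified here):

      import cloudscraper
      from requests_html import HTMLSession

      URL = "https://www.edmunds.com/inventory/srp.html?radius=5000&sort=publishDate%3Adesc&pagenumber=2"

      # cloudscraper wraps a requests session that mimics a real browser,
      # which is often enough to get past simple anti-bot checks
      scraper = cloudscraper.create_scraper()
      print("cloudscraper:", scraper.get(URL).status_code)

      # requests-html parses the response itself: .html.find() takes CSS
      # selectors, and .html.render() can execute JavaScript (it downloads
      # Chromium on first use)
      session = HTMLSession()
      response = session.get(URL)
      print("requests-html:", response.status_code, response.html.find("title", first=True))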