How can I change my location while trying to scrape a website?
I am working on a site that does not show any products unless the visitor is in Turkey. The site is Carrefoursa. Scraping from my own computer works fine, since I am located in Turkey, but my server is in Germany, and because of its location the spider fails there. Here is what I have tried so far.

I tried sending the locale as cookies on the Request:
import scrapy

class CarrefoursaSpider(scrapy.Spider):
    name = 'carrefoursa'
    allowed_domains = ['www.carrefoursa.com']
    start_urls = ['https://www.carrefoursa.com/meyve/c/1015']
    custom_settings = {
        "LOG_FILE": "scrapy_logs/" + name + ".log",
        "ROBOTSTXT_OBEY": False,
        "USER_AGENTS": None,
        "COOKIES_ENABLED": True,
        "COOKIES_DEBUG": True,
    }

    def parse(self, response):
        request = scrapy.Request(
            response.url,
            callback=self.parse_product,
            cookies={'Content-Language': 'tr', 'currency': 'TRY', 'country': 'TR', 'lang': 'tr'},
            dont_filter=True,
        )
        yield request

    def parse_product(self, response):
        ...
I also tried reaching the site through a VPN in another country, but got the following error:
The requested URL was rejected. Please consult with your administrator.
Your support ID is: ******
Do you have any suggestions other than using a proxy?
Adding a proxy to the request meta in my spider solved my problem.
import scrapy

class CarrefoursaSpider(scrapy.Spider):
    name = 'carrefoursa'
    allowed_domains = ['www.carrefoursa.com']
    start_urls = ['https://www.carrefoursa.com/meyve/c/1015']
    custom_settings = {
        "LOG_FILE": "scrapy_logs/" + name + ".log",
        "ROBOTSTXT_OBEY": False,
        "USER_AGENTS": None,
        "COOKIES_ENABLED": True,
        "COOKIES_DEBUG": True,
    }

    def parse(self, response):
        request = scrapy.Request(
            response.url,
            callback=self.parse_product,
            cookies={'Content-Language': 'tr', 'currency': 'TRY', 'country': 'TR', 'lang': 'tr'},
            # The proxy must be set per request via meta, where Scrapy's
            # HttpProxyMiddleware picks it up (a class-level attribute is ignored).
            meta={'proxy': 'xxx.xxx.xxx.xx:xxxx'},
            dont_filter=True,
        )
        yield request

    def parse_product(self, response):
        ...
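Before wiring a proxy into the spider, it can be worth checking that the endpoint is usable at all. The sketch below is a minimal stdlib illustration of the same per-request proxy idea, not part of the original answer; the function name `build_proxied_opener` and the placeholder address are assumptions for the example.

```python
import urllib.request

def build_proxied_opener(proxy_url: str) -> urllib.request.OpenerDirector:
    """Return an opener that routes all HTTP/HTTPS traffic through proxy_url."""
    handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
    return urllib.request.build_opener(handler)

# Placeholder address, mirroring the redacted proxy in the answer above.
opener = build_proxied_opener("http://xxx.xxx.xxx.xx:xxxx")
# opener.open("https://www.carrefoursa.com/meyve/c/1015") would now go
# through the proxy, so the site sees the proxy's (Turkish) exit IP.
```

If the request through the opener succeeds where a direct one is rejected, the proxy is doing its job and can be dropped into the spider's `meta={'proxy': ...}` as shown in the answer.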