It's easy to use our API for data harvesting. Just send the URL you would like to scrape to the API along with your API key, and the API will return the HTML response from that URL. You can use the API to scrape web pages, JSON API pages, documents, PDFs, images, or other files. Note: there is a 2 MB limit per request; if the response page or your request exceeds 2 MB, you will get an error message. To lift the response page limit, please contact us.
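Since the 2 MB cap applies per request, a small client-side check can fail fast before you process a response any further. This is a minimal sketch with a hypothetical helper (not part of the API itself):

```python
# The API rejects requests/responses over 2 MB, per the note above.
MAX_RESPONSE_BYTES = 2 * 1024 * 1024  # 2 MB cap

def within_limit(payload: bytes) -> bool:
    """Return True if the payload fits under the documented 2 MB limit."""
    return len(payload) <= MAX_RESPONSE_BYTES

# After a request, you could check r.content with this helper
# before parsing, and contact support to lift the limit if needed.
```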
We use API keys to authenticate requests. To use the API you need to sign up for an account and include your unique API key in every request. In this documentation we will use 'xxxxxxxxxx' as a dummy API key; when you scrape data with our API, replace it with your own API key.
Before you proceed, go ahead and sign up here to get an API key with 1,000 free API credits.
A simple HTTP GET request with our API:
curl "http://api.barkingdata.com/?url=http://ip-api.com/json&api_key=xxxxxxxxxx"
http://api.barkingdata.com is our API endpoint; all you have to do is pass the url and the api_key parameters to this endpoint.
HTTP GET request with Python:
import requests

URL = "http://api.barkingdata.com"
api_key = "xxxxxxxxxx"
url = "http://ip-api.com/json"

# Parameters to be sent to the API
PARAMS = {'api_key': api_key, 'url': url, 'http_version': 'h1'}

# Send the GET request and print the response page
r = requests.get(URL, params=PARAMS)
print(r.content)
Users can send POST requests with custom POST data.
import requests

URL = "http://api.barkingdata.com"
api_key = "xxxxxxxxxx"
url = "http://httpbin.org/anything"

# Parameters to be sent to the API
PARAMS = {'api_key': api_key, 'url': url}
postdata = 'post_param1=value1&post_param2=value2'

# Send the POST request and print the response page
r = requests.post(url=URL, params=PARAMS, data=postdata)
print(r.content)
Note: postdata is defined as a str type here; you can also change postdata to a dict type to submit form data, for example:
postdata = {'post_param1': 'value1', 'post_param2': 'value2'} and then send the request again:
r = requests.post(url=URL, params=PARAMS, data=postdata)
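Putting the dict variant together, here is a minimal sketch. It uses requests' Request.prepare() purely to show what would go on the wire (the final URL and the urlencoded form body) without actually sending anything; the endpoint and parameter names are the ones from the examples above.

```python
import requests

API = "http://api.barkingdata.com"
PARAMS = {"api_key": "xxxxxxxxxx", "url": "http://httpbin.org/anything"}

# Form data as a dict; requests urlencodes it into the POST body.
postdata = {"post_param1": "value1", "post_param2": "value2"}

# Prepare the request without sending it, to inspect the wire format.
req = requests.Request("POST", API, params=PARAMS, data=postdata).prepare()
print(req.url)   # endpoint with api_key and url query parameters
print(req.body)  # urlencoded form body
```

To actually send it, replace the prepare() inspection with requests.post(API, params=PARAMS, data=postdata) as in the example above.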
Users can send any custom headers.
import requests

URL = "http://api.barkingdata.com"
api_key = "xxxxxxxxxx"
url = "http://httpbin.org/anything"

# Parameters to be sent to the API
PARAMS = {'api_key': api_key, 'url': url}

# Custom headers to forward with the request
headers = {
    'My-Custom-Header': 'MY custom header data',
    'My-Custom-Header2': 'MY custom header data2',
}

# Send the GET request and print the response page
r = requests.get(url=URL, params=PARAMS, headers=headers)
print(r.content)
Users can send any custom cookies.
import requests

URL = "http://api.barkingdata.com"
api_key = "xxxxxxxxxx"
url = "http://httpbin.org/anything"

# Parameters to be sent to the API
PARAMS = {'api_key': api_key, 'url': url}

# Custom cookies to forward with the request
mycookies = {
    'My-Custom-Cookie': 'MY custom cookie data',
    'My-Custom-Cookie2': 'MY custom cookie data2',
}

# Send the GET request and print the response page
r = requests.get(url=URL, params=PARAMS, cookies=mycookies)
print(r.content)
Over 150 countries are supported. In the URL, just pass the two-letter ISO country code. For example, to use a UK residential proxy, use these parameters: country=gb&premium=true (please note that you must specify premium=true to use residential proxies).
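As a concrete sketch, the geo-targeting parameters slot into the same params dict as the earlier examples. Again this uses requests' Request.prepare() just to show the final URL without sending a request; the country and premium parameter names come from the paragraph above.

```python
import requests

API = "http://api.barkingdata.com"
PARAMS = {
    "api_key": "xxxxxxxxxx",
    "url": "http://ip-api.com/json",
    "country": "gb",     # two-letter ISO country code (UK)
    "premium": "true",   # required for residential proxies
}

# Prepare the request without sending it, to inspect the final URL.
req = requests.Request("GET", API, params=PARAMS).prepare()
print(req.url)
```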
We don't do JS rendering, because using webdrivers/Selenium/Puppeteer/Playwright/Node.js to load and render a page has four serious drawbacks:
1. It generates a lot of requests behind the scenes and can easily put a burden on the target website.
2. It's sluggish and slow.
3. It consumes a lot of CPU and memory when rendering the page.
4. It can cost you a lot more money compared to the non-rendering approach.
If you think you need to scrape data with JS rendering, please contact us. In 99.99% of cases we can help you convert it to a much simpler crawler that doesn't use JS rendering, and you can easily save 30-80%!
Industry's Lowest-Priced Google SERP API Service: Scrape Google SERP Anonymously and Consistently
Web Scrape Google Flights Data to Get Real-Time Airline Ticket Pricing and Flight Schedules
Web Crawler to Extract Product and Category Data from Top Fashion Website Nordstrom.com
Web crawlers to harvest food delivery data from Uber Eats, DoorDash, Grubhub ...
Web Crawlers to scrape homedepot.com for product listings and product details data
Web crawlers to scrape China hotel data from top hotel websites such as Holiday Inn, Ctrip etc.
Grab Holdings Inc., commonly known as Grab, is a Southeast Asian technology company headquartered in Singapore and Indonesia. In addition to transportation, the company offers food delivery and digital payments services via a mobile app. Grab currently operates in Singapore, Malaysia, Cambodia, Indonesia ...
Collect millions of real estate records from Thailand's major real estate website ddproperty.com
Web crawlers to scrape Lazada for product listing and category data
Web Crawlers to Scrape Global Interest Rates, Mortgage Rates, and Deposit Rates
One of the industry's best web crawler services for China's major e-commerce websites such as Tmall, JD, Kaola, Pinduoduo etc.