Smart Proxy Manager API Documentation

We are rebuilding the next-generation Smart Proxy API, and it will be live soon!

Getting Started

It's easy to use our API for data harvesting. Just send the URL you would like to scrape to the API along with your API key, and the API will return the HTML response from that URL. You can use the API to scrape web pages, JSON API pages, documents, PDFs, images, and other files. Note: there is a 2MB limit per request; if the response page or your request exceeds 2MB, you will get an error message. To lift the response size limit, please contact us.

API Key & Authentication

We use API keys to authenticate requests. To use the API, you need to sign up for an account and include your unique API key in every request. In this documentation we will use 'xxxxxxxxxx' as a dummy API key; when you scrape data with our API, replace it with your own API key.

Before you proceed, go ahead and sign up here to get an API key with 1000 free API credits.

Simple GET Request

A simple HTTP GET request with our API:

curl "http://api.barkingdata.com/?url=http://ip-api.com/json&api_key=xxxxxxxxxx"


http://api.barkingdata.com is our API endpoint; all you have to do is pass the target URL and your API key to this endpoint.

HTTP GET request with Python:

 
import requests

URL = "http://api.barkingdata.com"

api_key = "xxxxxxxxxx"
url = "http://ip-api.com/json"
# define a params dict for the parameters to be sent to the API
PARAMS = {'api_key': api_key, 'url': url, 'http_version': 'h1'}

# send the GET request and print the response page
r = requests.get(URL, params=PARAMS)
print(r.content)

HTTP/2.0 Request

The HTTP version used to fetch the target URL can be selected with the http_version parameter; the GET example above sets it to h1.
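
A minimal sketch of an HTTP/2.0 request, assuming the http_version parameter also accepts 'h2' (only 'h1' appears in this documentation, so please confirm the accepted values with support):

import requests

URL = "http://api.barkingdata.com"

api_key = "xxxxxxxxxx"
url = "http://httpbin.org/anything"
# 'h2' is an assumed value here, by analogy with the 'h1' shown above
PARAMS = {'api_key': api_key, 'url': url, 'http_version': 'h2'}

# send the GET request and print the response page
r = requests.get(URL, params=PARAMS)
print(r.content)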

Simple POST Request


import requests 
  
URL = "http://api.barkingdata.com"
  
api_key = "xxxxxxxxxx"
url = "http://httpbin.org/anything"
# defining a params dict for the parameters to be sent to the API
PARAMS = {'api_key': api_key, 'url': url}
postdata = 'post_param1=value1&post_param2=value2'

# send the POST request and print the response page
r = requests.post(url=URL, params=PARAMS, data=postdata)
print(r.content)

Note: postdata is defined as a str type; you can also change postdata to a dict type to submit form data, for example:

postdata = {'post_param1': 'value1', 'post_param2': 'value2'} and then send the request again:

r = requests.post(url=URL, params=PARAMS, data=postdata)

Request Headers

Users can send any custom headers.

import requests 
  
URL = "http://api.barkingdata.com"
api_key = "xxxxxxxxxx"
url = "http://httpbin.org/anything"
# defining a params dict for the parameters to be sent to the API
PARAMS = {'api_key': api_key, 'url': url}

headers = {
    'My-Custom-Header': 'MY custom header data',
    'My-Custom-Header2': 'MY custom header data2',
}
# send the GET request and print the response page
r = requests.get(url=URL, params=PARAMS, headers=headers)
print(r.content)

Custom Cookies

Users can send any custom cookies.

import requests 
  
URL = "http://api.barkingdata.com"
api_key = "xxxxxxxxxx"
url = "http://httpbin.org/anything"
# defining a params dict for the parameters to be sent to the API
PARAMS = {'api_key': api_key, 'url': url}

mycookies = {
    'My-Custom-Cookie': 'MY custom cookie data',
    'My-Custom-Cookie2': 'MY custom cookie data2',
}
# send the GET request and print the response page
r = requests.get(url=URL, params=PARAMS, cookies=mycookies)
print(r.content)


Geotargeting: the Industry's Lowest-Cost Residential Proxy

Over 150 countries are supported. In the request URL, just pass the two-letter ISO country code. For example, to use a UK residential proxy, add these parameters: country=gb&premium=true (please note that you must specify premium=true to use residential proxies). A sketch follows below.
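
A minimal sketch of a GET request through a UK residential proxy, assuming the country and premium parameters combine with api_key and url exactly as in the earlier examples:

import requests

URL = "http://api.barkingdata.com"

api_key = "xxxxxxxxxx"
url = "http://ip-api.com/json"
# country=gb selects UK residential proxies;
# premium=true is required whenever residential proxies are used
PARAMS = {'api_key': api_key, 'url': url, 'country': 'gb', 'premium': 'true'}

# send the GET request and print the response page
r = requests.get(URL, params=PARAMS)
print(r.content)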


JS Rendering?

We don't do JS rendering, because using webdrivers/Selenium/Puppeteer/Playwright/Node.js to load and render a page has four serious drawbacks:

1. It generates a lot of requests behind the scenes and can easily burden the target website.
2. It is sluggish and slow.
3. It eats a lot of CPU and memory when rendering the page.
4. It can cost you a lot more money compared to the non-rendering approach.

If you think you need JS rendering to scrape your data, please contact us; 99.99% of the time, we can help you convert it to a much simpler crawler without JS rendering, and you can easily save 30-80%!

