Copy-pasteable proxy configuration for the most common tools — cURL, Python requests and aiohttp, Node.js fetch and axios, Scrapy, Playwright, Puppeteer, and browser settings — with authentication, HTTPS, and SOCKS5 variants.
The Anatomy of a Proxy URL
A proxy URL follows the pattern protocol://[user:password@]host:port. Break it into pieces:

- protocol — http, https, or socks5. For free proxies, this is almost always http; SOCKS5 support is rare on public lists.
- user:password — optional. Free proxies don't need authentication; paid proxies usually do.
- host — the proxy's IP address or hostname.
- port — the proxy's TCP port.
A free HTTP proxy URL looks like http://203.0.113.5:8080. A paid SOCKS5 proxy with auth looks like socks5://user:[email protected]:8888. Most tools accept either form in the same configuration slot.

One piece of terminology that causes confusion: the https scheme on the proxy URL refers to the connection between you and the proxy being encrypted, not to the destination being HTTPS. HTTPS-to-proxy connections are rare for free proxies; the usual setup is an http scheme on the proxy URL even when the destination is an HTTPS site. The proxy will handle the CONNECT tunnel for HTTPS destinations transparently.

cURL
cURL takes a proxy with -x (or --proxy):

# Plain HTTP target
curl -x http://203.0.113.5:8080 http://httpbin.org/ip
# HTTPS target
curl -x http://203.0.113.5:8080 https://httpbin.org/ip
# SOCKS5 proxy
curl --socks5 203.0.113.5:1080 https://httpbin.org/ip
# SOCKS5 with remote DNS resolution (avoids DNS leak)
curl --socks5-hostname 203.0.113.5:1080 https://httpbin.org/ip
# Authenticated proxy
curl -x http://user:[email protected]:8080 https://httpbin.org/ipFor free proxies that require disabling SSL verification (loose-SSL proxies), add
-k or --insecure. Only use this for throwaway traffic — it disables the certificate chain validation that keeps man-in-the-middle attacks from working.curl -k -x http://203.0.113.5:8080 https://httpbin.org/ipYou can also set the
http_proxy, https_proxy, and all_proxy environment variables to apply a proxy to every cURL invocation in the shell session:export http_proxy=http://203.0.113.5:8080
export https_proxy=http://203.0.113.5:8080
curl https://httpbin.org/ip

Python: requests
The requests library accepts a proxies dictionary that maps destination scheme to proxy URL:

import requests

proxies = {
    'http': 'http://203.0.113.5:8080',
    'https': 'http://203.0.113.5:8080',
}
response = requests.get('https://httpbin.org/ip', proxies=proxies)
print(response.json())

Keys are the destination scheme, not the proxy scheme. In almost every case you'll use the same HTTP proxy URL for both the http and https keys, because the same HTTP proxy handles both via CONNECT tunneling.

For SOCKS5, install the requests[socks] extra (which pulls in the PySocks package):

pip install 'requests[socks]'

proxies = {
    'http': 'socks5://203.0.113.5:1080',
    'https': 'socks5://203.0.113.5:1080',
}

# Use socks5h:// to resolve DNS through the proxy (prevents DNS leaks)
proxies_with_remote_dns = {
    'http': 'socks5h://203.0.113.5:1080',
    'https': 'socks5h://203.0.113.5:1080',
}

Loose-SSL free proxies often require verify=False, which also triggers an InsecureRequestWarning. Suppress the warning and disable verification in one block:

import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
response = requests.get(
    'https://httpbin.org/ip',
    proxies={'http': 'http://203.0.113.5:8080', 'https': 'http://203.0.113.5:8080'},
    verify=False,
    timeout=10,
)

Python: aiohttp (async)
For async Python, aiohttp is the standard client. Proxy configuration is per-request or per-session:

import asyncio
import aiohttp

async def fetch():
    async with aiohttp.ClientSession() as session:
        async with session.get(
            'https://httpbin.org/ip',
            proxy='http://203.0.113.5:8080',
            timeout=aiohttp.ClientTimeout(total=15),
        ) as response:
            return await response.json()

print(asyncio.run(fetch()))

For SOCKS5 with aiohttp, install aiohttp_socks:

pip install aiohttp_socks

from aiohttp_socks import ProxyConnector

connector = ProxyConnector.from_url('socks5://203.0.113.5:1080')
async with aiohttp.ClientSession(connector=connector) as session:
    async with session.get('https://httpbin.org/ip') as response:
        print(await response.json())

For loose-SSL free proxies, pass ssl=False to the request — this disables certificate verification for that request only, which is safer than disabling it globally.

Node.js: fetch, axios, and undici

Node's built-in fetch accepts a dispatcher for proxy routing via undici:

import { ProxyAgent } from 'undici';
const agent = new ProxyAgent('http://203.0.113.5:8080');
const response = await fetch('https://httpbin.org/ip', { dispatcher: agent });
console.log(await response.json());

For axios, use https-proxy-agent or http-proxy-agent:

import axios from 'axios';
import { HttpsProxyAgent } from 'https-proxy-agent';
const agent = new HttpsProxyAgent('http://203.0.113.5:8080');
const response = await axios.get('https://httpbin.org/ip', {
  httpsAgent: agent,
  httpAgent: agent,
});
console.log(response.data);

For SOCKS5, use socks-proxy-agent. It produces a Node http.Agent, so pair it with axios (or http.request) rather than with the undici-based fetch:

import { SocksProxyAgent } from 'socks-proxy-agent';

const agent = new SocksProxyAgent('socks5://203.0.113.5:1080');
const response = await axios.get('https://httpbin.org/ip', {
  httpsAgent: agent,
  httpAgent: agent,
});

Authenticated proxies work by embedding credentials in the URL: http://user:pass@host:port.

Scrapy

Scrapy ships with HttpProxyMiddleware enabled by default. The simplest way to set a proxy for a spider is through the meta dictionary on each request:

import scrapy
class ProxySpider(scrapy.Spider):
    name = 'proxy_test'

    def start_requests(self):
        yield scrapy.Request(
            'https://httpbin.org/ip',
            meta={'proxy': 'http://203.0.113.5:8080'},
        )

    def parse(self, response):
        self.log(response.text)

For rotation across a list of proxies, use a custom downloader middleware:
# middlewares.py
import random

class ProxyRotationMiddleware:
    def __init__(self, proxies):
        self.proxies = proxies

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings.getlist('ROTATING_PROXY_LIST'))

    def process_request(self, request, spider):
        request.meta['proxy'] = random.choice(self.proxies)

Enable it in settings.py and load the proxy list:

DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.ProxyRotationMiddleware': 100,
}

ROTATING_PROXY_LIST = [
    'http://203.0.113.5:8080',
    'http://203.0.113.6:8080',
    # ...
]

For production-grade rotation with auto-retry on failures, the scrapy-rotating-proxies package handles banned-proxy detection automatically.

Playwright

Playwright takes the proxy in its launch options:
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as p:
        browser = await p.chromium.launch(
            proxy={
                'server': 'http://203.0.113.5:8080',
                # 'username': 'user',  # if authenticated
                # 'password': 'pass',
            },
        )
        page = await browser.new_page()
        await page.goto('https://httpbin.org/ip')
        print(await page.content())
        await browser.close()

import asyncio
asyncio.run(main())

Node.js version:
import { chromium } from 'playwright';
const browser = await chromium.launch({
  proxy: { server: 'http://203.0.113.5:8080' },
});
const page = await browser.newPage();
await page.goto('https://httpbin.org/ip');
await browser.close();

For per-context proxy configuration (useful when rotating IPs across contexts):
const context = await browser.newContext({
  proxy: { server: 'http://203.0.113.5:8080' },
});

Playwright supports SOCKS5 by passing socks5://host:port as the server value. It also supports per-context proxies with different credentials, which is how you'd rotate between paid residential proxies with session-based targeting.

Puppeteer

Puppeteer takes the proxy as a Chromium launch argument:
import puppeteer from 'puppeteer';
const browser = await puppeteer.launch({
  args: ['--proxy-server=http://203.0.113.5:8080'],
});
const page = await browser.newPage();
// For authenticated proxies, provide credentials at the page level
// await page.authenticate({ username: 'user', password: 'pass' });
await page.goto('https://httpbin.org/ip');
console.log(await page.content());
await browser.close();

For SOCKS5:

args: ['--proxy-server=socks5://203.0.113.5:1080']

Puppeteer doesn't have native per-context proxy support like Playwright, so rotating IPs requires launching separate browser instances. For high-volume rotation, use puppeteer-extra-plugin-proxy-router or drop down to Playwright.

For loose-SSL proxies, pass --ignore-certificate-errors so the Chromium instance Puppeteer launches skips certificate validation:

args: [
  '--proxy-server=http://203.0.113.5:8080',
  '--ignore-certificate-errors',
]

Browser Configuration (Chrome, Firefox, Edge)
Chrome / Edge (Windows, macOS, Linux). Chrome defers to the OS proxy settings by default. Set the proxy at the OS level (System Preferences > Network on macOS; Settings > Network & Internet > Proxy on Windows; depends on distribution on Linux) and Chrome picks it up.
To use a per-launch proxy without touching system settings, launch Chrome from the command line:
# macOS
open -na 'Google Chrome' --args --proxy-server='http://203.0.113.5:8080' --user-data-dir=/tmp/chrome-proxy
# Windows
chrome.exe --proxy-server='http://203.0.113.5:8080' --user-data-dir='%TEMP%\chrome-proxy'
# Linux
google-chrome --proxy-server='http://203.0.113.5:8080' --user-data-dir=/tmp/chrome-proxy

The --user-data-dir flag isolates this session from your normal Chrome profile — always use it when running free proxies.

Firefox. Firefox has its own proxy settings, independent of the OS: Settings > Network Settings > Manual proxy configuration. Enter the HTTP proxy address and port, and check the 'Also use this proxy for HTTPS' box.
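If you provision Firefox profiles in bulk, the same manual settings can be written to a profile's user.js file instead of clicked through — a sketch, assuming the standard network.proxy.* preference names:

```javascript
user_pref("network.proxy.type", 1);  // 1 = manual proxy configuration
user_pref("network.proxy.http", "203.0.113.5");
user_pref("network.proxy.http_port", 8080);
user_pref("network.proxy.ssl", "203.0.113.5");
user_pref("network.proxy.ssl_port", 8080);
// For SOCKS5, set network.proxy.socks / network.proxy.socks_port instead, plus:
user_pref("network.proxy.socks_remote_dns", true);
```

Firefox reads user.js at startup and copies its values into prefs.js for that profile.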
For DNS over SOCKS (the network.proxy.socks_remote_dns setting): type about:config in the URL bar, search for socks_remote_dns, and set it to true. This routes DNS queries through the SOCKS5 proxy and prevents DNS leaks.

For production use, add the FoxyProxy or Proxy SwitchyOmega extension — both allow rule-based proxy switching, so different domains can route through different proxies without reconfiguring each time.

Using Databay's Free Proxy List Programmatically

The list is served from https://databay.com/api/v1/proxy-list with filter parameters for protocol, country, anonymity, SSL, Google compatibility, and speed. Pull a fresh batch and feed it into any of the tools above:

import requests
import random
# Pull 100 fast, elite-anonymity proxies
resp = requests.get(
    'https://databay.com/api/v1/proxy-list',
    params={'anonymity': 'elite', 'speed': 'fast', 'limit': 100, 'format': 'json'},
    timeout=10,
)
proxies_raw = resp.json()['data']
proxy_urls = [f"http://{p['ip']}:{p['port']}" for p in proxies_raw]
# Rotate through them
for url in ['https://example.com/page-1', 'https://example.com/page-2']:
    proxy = random.choice(proxy_urls)
    response = requests.get(url, proxies={'http': proxy, 'https': proxy}, timeout=15)
    print(response.status_code, proxy)

The API is rate-limited to 50 requests per second, with 10-second response caching — plenty for pulling a fresh list every few minutes. CSV and plain-text output formats are available via ?format=csv and ?format=txt for shell scripts and Docker builds. The list re-verifies every 10 minutes, and entries that fail uptime checks are removed on the next cycle, so dead addresses are pruned quickly.

Troubleshooting Common Proxy Errors
Connection refused / timeout. The proxy is dead. Skip it and try another. Free proxies fail constantly; any retry logic should switch proxies on connection errors.
407 Proxy Authentication Required. The proxy expects credentials you didn't provide. Either it's not actually a free proxy, or it's the wrong proxy entirely. Remove it from your list.
403 Forbidden from the destination. The destination site has detected or blocked the proxy. Try a different proxy. If many proxies are getting blocked, the target has serious anti-bot defenses and you need residential or mobile proxies instead of datacenter ones.
SSL certificate errors. The proxy is man-in-the-middling HTTPS with its own cert. For throwaway traffic, disable verification (verify=False, -k, --insecure). For anything sensitive, stop using that proxy immediately — it can read your supposedly-encrypted traffic.

Tunnel connection failed (status 502). The proxy couldn't reach the destination. Usually this means the destination is filtering proxy IPs, or the proxy itself is network-constrained. Retry with a different proxy.
Garbled or injected responses. The proxy is actively modifying traffic — ad injection, cryptominer scripts, phishing redirects. Stop using the proxy, rotate any credentials you may have sent through it, and treat the session as compromised.
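A cheap guard against tampering: before trusting a proxy, fetch an endpoint whose response shape you know and reject anything that doesn't match. The helper below is a sketch of that idea — response_is_clean is my own name, and the heuristic (valid JSON, no markup) is only as good as the endpoint you test against:

```python
import json

def response_is_clean(body: str) -> bool:
    """Heuristic integrity check for a response that should be pure JSON.

    Proxies that inject ads or scripts produce bodies that either
    contain HTML markup or fail to parse as JSON at all.
    """
    if '<script' in body.lower():
        return False
    try:
        json.loads(body)
    except ValueError:
        return False
    return True
```

Run it against a known endpoint like httpbin.org/ip once per proxy, and drop any proxy that fails before sending it real traffic.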
For production scraping through free proxies, build retry logic that treats any error as a signal to rotate: try each URL against a fresh proxy, up to N times. Combined with aggressive timeouts (5-10 seconds), this makes flaky free proxies usable despite their individual unreliability.
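That retry-and-rotate loop can be sketched in a few lines — fetch_with_rotation is a hypothetical helper name, implementing the "any failure rotates" policy described above:

```python
import random
import requests

def fetch_with_rotation(url, proxy_urls, max_attempts=5, timeout=8):
    """Try a URL through up to max_attempts different proxies.

    Any exception or non-2xx status rotates to a fresh proxy; returns
    the first successful response, or raises RuntimeError if all fail.
    """
    pool = list(proxy_urls)
    random.shuffle(pool)
    for proxy in pool[:max_attempts]:
        try:
            response = requests.get(
                url,
                proxies={'http': proxy, 'https': proxy},
                timeout=timeout,
            )
            if response.ok:
                return response
        except requests.RequestException:
            pass  # dead or blocked proxy: fall through and rotate
    raise RuntimeError(f'all {max_attempts} proxy attempts failed for {url}')
```

Feed it the proxy_urls list pulled from the API above; the short timeout keeps a dead proxy from stalling the whole crawl.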