
BrightData n8n Integration: Automate BrightData with n8n
Need help automating BrightData with n8n?
Our team will get back to you in minutes.
Why automate BrightData with n8n?
The BrightData n8n integration gives you access to 15 actions spanning three core resources: Web Unlocker, Web Scraper, and Marketplace Datasets. This means you can build complete data collection pipelines—from unlocking blocked websites to extracting structured data and delivering it directly to your storage systems.
Time savings are substantial. Instead of manually configuring scraping jobs, monitoring their progress, and downloading results, you set up automated workflows once. n8n handles the orchestration: triggering extractions on schedule, checking job status, downloading snapshots when ready, and routing data to your CRM, database, or analytics platform. Teams report cutting data collection time by 70-80% after implementing these automations.
The real power comes from integration. Connect BrightData to over 400 applications in n8n to build workflows like: automatically scrape competitor pricing daily and update your Google Sheets → send Slack alerts when prices change → sync new data to Airtable for your sales team. Or: extract LinkedIn company data via Web Scraper → enrich leads in HubSpot → trigger personalized email sequences in your outreach tool. The combinations are endless, and every workflow runs hands-off once configured.
How to connect BrightData to n8n?
- 01
Add the node
Search for the BrightData node and add it to your workflow.
💡 TIP: Create separate API keys for different environments (development, staging, production). This way, if you need to revoke access or rotate keys, you won't disrupt all your workflows at once. Also, store your API key in n8n's credential manager rather than hardcoding it; this keeps your workflows portable and secure.
BrightData actions available in n8n
Action 01: Access and extract data...
This action leverages BrightData's Web Unlocker resource to access and extract data from websites that might otherwise block automated requests. It's your gateway to scraping content from sites with anti-bot protection, CAPTCHAs, or geographic restrictions.
Key parameters:
- Zone: the BrightData proxy zone to route your request through (e.g., 'web_unlocker1'). Required; determines your proxy pool configuration.
- Country: the country code for your request origin (e.g., 'us', 'uk', 'de'). Required; essential for geo-targeted content.
- Method: the HTTP method for your request. Defaults to 'GET' but supports POST and others. Required.
- URL: the target URL you want to access. Required text field.
- Format: the output format for the response data; 'Raw' returns the unprocessed HTML/content. Required.
Use cases: Access region-locked content by routing requests through specific countries. Scrape e-commerce sites with aggressive bot protection. Extract data from websites requiring residential IP addresses.
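The parameters above map onto a simple request body. The sketch below shows how you might assemble one in code; the field names mirror the node's parameter labels and are illustrative, not BrightData's exact API schema.

```python
# Sketch of a Web Unlocker request body; field names follow the node's
# parameters (Zone, Country, Method, URL, Format) and are assumptions,
# not a verified BrightData API schema.
def unlocker_payload(url, zone="web_unlocker1", country="us",
                     method="GET", fmt="raw"):
    """Build the request body for a Web Unlocker call."""
    return {
        "zone": zone,        # proxy zone to route through (required)
        "url": url,          # target page (required)
        "country": country,  # request origin, e.g. 'us', 'uk', 'de'
        "method": method,    # HTTP method, defaults to GET
        "format": fmt,       # 'raw' returns the unprocessed HTML
    }

body = unlocker_payload("https://example.com/pricing", country="de")
```

In n8n you would set these fields directly on the node rather than building the body yourself; the sketch just makes the shape of the request explicit.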

Action 02: Get filtered snapshots
This action retrieves snapshots from your Web Scraper datasets with built-in filtering and pagination. Perfect for accessing historical scraping results or pulling specific data subsets from large collections.
Key parameters:
- Dataset: which dataset to pull snapshots from. Required dropdown selection.
- Status: the current state of available snapshots (e.g., 'Ready').
- Skip: the number of items to skip in results. Numeric input, defaults to 0. Optional; useful for pagination.
- Limit: the maximum records to retrieve. Defaults to 50. Optional.
- From Date: filters snapshots starting from a specific date. Accepts dynamic expressions. Optional.
Use cases: Pull only the most recent scraping results for daily reports. Paginate through large snapshot collections without overloading memory. Filter historical data for specific time periods during analysis.
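Skip and Limit together implement standard offset pagination. A minimal sketch of the page arithmetic, using a hypothetical helper rather than anything built into the node:

```python
# Offset pagination over snapshots: each page is a (skip, limit) pair
# matching the node's Skip and Limit parameters.
def page_params(total, limit=50):
    """Yield one parameter dict per page needed to cover `total` items."""
    for skip in range(0, total, limit):
        yield {"skip": skip, "limit": min(limit, total - skip)}

pages = list(page_params(120, limit=50))
```

In a workflow you would feed each dict into a separate "Get filtered snapshots" call, for example inside an n8n loop.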

Action 03: Initiate batch extraction
Launch large-scale scraping jobs across multiple URLs simultaneously. This action triggers BrightData's collection process, perfect for situations where you need to scrape hundreds or thousands of pages efficiently.
Key parameters:
- Dataset: your target dataset configuration. Required dropdown.
- URLs: target URLs in JSON array format. Required text field; this is where you specify every page to scrape.
- Endpoint: the endpoint receiving your scraping requests. Required.
- Notify: the webhook URL for completion notifications. Required; set this to receive alerts when your batch job finishes.
Use cases: Scrape all product pages from a competitor's catalog overnight. Collect data from a list of LinkedIn company URLs for enrichment. Run weekly extractions across multiple e-commerce sites for price monitoring.
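The URLs field expects a JSON array, which is easy to get wrong by hand. A rough sketch of assembling it programmatically; the one-object-per-URL shape is an assumption for illustration:

```python
import json

# Build a batch-extraction body: the node's URLs field takes a JSON
# array, here assumed to be one {"url": ...} object per page.
def batch_request(dataset_id, urls, notify_url):
    return {
        "dataset": dataset_id,
        "urls": json.dumps([{"url": u} for u in urls]),
        "notify": notify_url,  # webhook hit when the job finishes
    }

req = batch_request("gd_example",
                    ["https://a.example/p1", "https://b.example/p2"],
                    "https://hooks.example/done")
```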

Action 04: Extract structured data
Scrape and structure data from specific URLs using BrightData's Web Scraper. This action returns clean, formatted data rather than raw HTML—ideal when you need specific data points extracted automatically.
Key parameters:
- Dataset: the scraper configuration to use. Required dropdown.
- URLs: target URLs in JSON format. Required text input.
- Include Errors: toggle to include error messages in output. Optional on/off switch; enable this during debugging.
- Format: the output format selection, typically 'JSON'. Required.
Use cases: Extract product names, prices, and descriptions from e-commerce listings. Scrape contact information from business directories. Collect article metadata from news sites for content aggregation.

Action 05: Download Snapshot
Retrieve the actual data from a completed scraping job. Once your extraction finishes, use this action to download the results in your preferred format and batch size.
Key parameters:
- Batch Size: the number of records per download request. Required numeric input.
- Part: the part number for multi-part snapshots. Required integer; start with 1.
- Snapshot ID: the unique identifier for the snapshot. Required text field.
- Format: choose between JSON, CSV, and other formats. Required dropdown.
- Compress: toggle to compress downloaded data. Optional; useful for large datasets.
Use cases: Download completed scraping results for processing in downstream nodes. Retrieve data in CSV format for direct import into spreadsheets. Handle large snapshots in parts to avoid memory issues.
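For a multi-part snapshot you issue one download per part, incrementing the Part field. A minimal sketch of generating those per-part requests (hypothetical helper; the node sends each one for you):

```python
# One request-parameter dict per part of a multi-part snapshot.
# Parts are 1-indexed, matching the node's Part field.
def part_requests(snapshot_id, parts, batch_size=200, fmt="json"):
    return [
        {"snapshot_id": snapshot_id, "part": p,
         "batch_size": batch_size, "format": fmt}
        for p in range(1, parts + 1)
    ]

reqs = part_requests("s_42", 3)
```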

Action 06: Check the status of a browser automation
Monitor the progress of your running scraping jobs. This action queries BrightData to check whether an extraction is still processing, completed, or encountered errors.
Key parameters: Snapshot ID is the identifier of the snapshot to monitor. Optional text field—leave empty to check overall status.
Use cases: Build polling loops that wait for jobs to complete before downloading. Create monitoring dashboards showing active extraction status. Trigger alerts when long-running jobs exceed expected duration.
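A polling loop built on this action amounts to: check status, wait, repeat until 'ready' or a retry budget runs out. In n8n you would typically wire this with a Wait node looping back to the status check; the control flow can be sketched like so, with a stand-in callable replacing the actual action:

```python
import time

# Polling sketch: `get_status` stands in for the Check-status action.
# Status strings ('running', 'ready', 'failed') are assumptions for
# illustration, not BrightData's documented values.
def wait_until_ready(get_status, snapshot_id, interval=5, max_polls=30):
    for _ in range(max_polls):
        status = get_status(snapshot_id)
        if status == "ready":
            return True
        if status == "failed":
            raise RuntimeError(f"snapshot {snapshot_id} failed")
        time.sleep(interval)
    return False  # retry budget exhausted

# Simulate a job that becomes ready on the third poll.
states = iter(["running", "running", "ready"])
ok = wait_until_ready(lambda _sid: next(states), "s_abc123", interval=0)
```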

Action 07: Get metadata for a selected snapshot
Retrieve detailed information about a specific snapshot from your Marketplace Datasets. This includes creation date, record count, schema information, and processing status.
Key parameters: Snapshot ID is the unique identifier of the target snapshot. Optional text input.
Use cases: Verify snapshot completeness before triggering downstream processing. Log metadata for audit trails and data lineage tracking. Check record counts to validate extraction quality.

Action 08: Deliver a snapshot to storage
Automatically deliver your scraped data directly to Amazon S3 or other cloud storage destinations. This eliminates manual download-and-upload steps from your data pipeline.
Key parameters:
- Snapshot ID: the identifier of the snapshot to deliver. Required text field.
- Deliver Type: the destination selection (e.g., 'Amazon S3'). Required dropdown.
- Filename Template: a custom naming pattern for output files. Optional text with expression support.
- File Extension: the output format, like 'JSON' or 'CSV'. Required.
- Bucket: the S3 bucket name for storage. Optional for S3 delivery.
- Notify: the webhook for delivery completion notifications. Optional.
Use cases: Automatically archive scraped data to S3 for data lake ingestion. Deliver daily extraction results directly to your data warehouse staging area. Set up automated backup pipelines for critical datasets.
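A delivery configuration is again just a structured body built from the parameters above. A sketch, with field names mirroring the node's labels as an assumption (not BrightData's verified schema):

```python
# Sketch of a snapshot-delivery configuration; keys mirror the node's
# parameter labels and are illustrative only.
def s3_delivery(snapshot_id, bucket, template="{snapshot_id}",
                ext="json", notify=None):
    body = {
        "snapshot_id": snapshot_id,
        "deliver_type": "Amazon S3",
        "bucket": bucket,
        "filename_template": template,  # supports n8n expressions
        "file_extension": ext,
    }
    if notify:  # optional completion webhook
        body["notify"] = notify
    return body

cfg = s3_delivery("s_42", "my-data-lake", notify="https://hooks.example/done")
```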

Action 09: List all your snapshot IDs
Retrieve a complete list of snapshot identifiers from a specified dataset. Essential for building workflows that need to process or reference multiple snapshots programmatically.
Key parameters:
- Dataset: the target dataset. Required dropdown.
- View: a particular view, if needed. Optional text field.
- Status: filters by snapshot status (e.g., 'Ready').
Use cases: Build cleanup workflows that archive old snapshots. Generate reports listing all available data extractions. Create selection interfaces for manual snapshot processing.

Action 10: Split snapshot data to parts
Break large snapshots into manageable chunks for processing. This action retrieves part information for a snapshot, enabling you to download and process data in segments.
Key parameters: Snapshot ID is the identifier of the snapshot to split. Required text input.
Use cases: Process massive datasets without running out of memory. Enable parallel processing by splitting work across multiple workflow branches. Handle snapshots that exceed single-request size limits.

Action 11: List available datasets
Retrieve all datasets available in your BrightData Marketplace account. Use this to dynamically populate dataset options or audit your available data sources.
Key parameters:
- Resource: set to 'Marketplace Dataset'. Required dropdown.
- Operation: fixed to 'List Datasets'. Required.
Use cases: Build dynamic interfaces that show available datasets. Audit your BrightData account configuration. Create documentation workflows that catalog data sources.

Action 12: Deliver a snapshot to S3 (with AWS credentials)
Extended version of snapshot delivery with full AWS credential configuration. Use this when you need to deliver to S3 buckets outside your default configuration.
Key parameters:
- Snapshot ID: the target snapshot identifier. Required.
- Deliver Type: the destination type, typically 'Amazon S3'. Required.
- Bucket: the S3 bucket name. Required text input.
- AWS Access Key: your AWS access key for authentication. Required.
- AWS Secret Key: your AWS secret key. Required.
- File Extension: the output format selection. Required.
- Filename Template: a custom file naming pattern. Optional.
Use cases: Deliver data to client-owned S3 buckets with their credentials. Route snapshots to different AWS accounts based on data type. Configure cross-account data delivery pipelines.

Action 13: Filter Dataset
Apply filters to marketplace datasets and retrieve matching records. This action enables you to query specific subsets of data without downloading entire datasets.
Key parameters:
- Dataset: the target dataset to filter. Required dropdown.
- Records Limit: the maximum records to return. Required numeric input.
- Filter Type: 'Single Filter' or compound options. Required dropdown.
- Field Name: the field to filter on. Required text input.
- Operator: the comparison operator (e.g., 'Equals', 'Contains'). Required dropdown.
- Field Value: the value to match against. Required text input.
Use cases: Extract only records matching specific criteria from large datasets. Build targeted data pulls for specific market segments. Create filtered exports for stakeholder-specific reports.
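A single filter is the triple (field, operator, value) plus a record cap. A sketch of the query this action expresses; the nested-object shape is an assumption for illustration, not BrightData's exact schema:

```python
# Sketch of a 'Single Filter' marketplace query; the structure mirrors
# the node's Field Name / Operator / Field Value parameters.
def single_filter(dataset_id, field, operator, value, limit=1000):
    return {
        "dataset": dataset_id,
        "records_limit": limit,  # cap on returned records
        "filter": {"name": field, "operator": operator, "value": value},
    }

q = single_filter("gd_example", "industry", "Equals", "software", limit=500)
```

Compound filter types would combine several such triples; the single-filter case covers most targeted pulls.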

Action 14: Retrieve metadata for a dataset
Get detailed information about a specific marketplace dataset, including schema, field definitions, record counts, and update frequency.
Key parameters: Dataset selects the target dataset. Required dropdown with text input support.
Use cases: Understand dataset structure before building extraction workflows. Document available fields for team reference. Validate dataset contents match expected schema.

Action 15: Retrieve data by snapshot
Download actual content from a specific marketplace snapshot. This action fetches the data records themselves, with options for compression and pagination.
Key parameters:
- Snapshot ID: the identifier of the target snapshot. Required text input.
- Compress: toggle for compressed output. Optional on/off.
- Batch Size: records per request. Required numeric input.
- Part: the part number for paginated retrieval. Required numeric.
- Format: the output format selection (typically 'JSON'). Required dropdown.
Use cases: Download marketplace data for local processing and analysis. Integrate BrightData datasets with your existing data pipelines. Build automated imports from marketplace sources to internal systems.

Build your first workflow with our team
Drop your email and we'll send you the catalog of automations you can ship today.
- Free n8n & Make scenarios to import
- Step-by-step setup docs
- Live cohort + community support
Frequently asked questions
Is the BrightData n8n integration free to use?
The n8n integration itself is free and included with n8n (both self-hosted and cloud versions). However, you'll need an active BrightData subscription to use the integration—BrightData's services are paid based on your usage (bandwidth, successful requests, or dataset access). The integration simply connects n8n to your existing BrightData account, so your costs depend on your BrightData plan and how much data you collect through your automated workflows.

What's the difference between Web Scraper and Web Unlocker actions in BrightData n8n?
Web Unlocker and Web Scraper serve different purposes in your data collection pipeline. Web Unlocker handles the access layer—it routes requests through BrightData's proxy network to bypass anti-bot systems, CAPTCHAs, and geo-restrictions, returning raw HTML or page content. Web Scraper works at the extraction layer—it uses pre-configured scrapers to parse websites and return structured data (like product names, prices, descriptions) in clean JSON format. Typically, you'd use Web Unlocker when you need raw page access or have custom parsing logic, and Web Scraper when you want BrightData to handle both access and data structuring automatically.

How do I handle large datasets when automating BrightData with n8n?
For large datasets, use the pagination and batching parameters available in most actions. Set appropriate Batch Size values (start with 100-500 records), use the Part parameter to retrieve data in chunks, and leverage the Skip and Limit parameters for controlled pagination. For very large extractions, enable the Compress toggle to reduce data transfer size. Consider using the "Split snapshot data to parts" action first to understand how many parts exist, then loop through them systematically. This approach prevents memory issues and ensures reliable processing of datasets with millions of records.
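The part-by-part approach described above can be sketched as a simple loop: fetch each part, append its records, move on. A stand-in callable replaces the Download Snapshot action so the control flow is clear:

```python
# Chunked retrieval sketch: `fetch_part` stands in for the
# Download Snapshot action; parts are 1-indexed.
def fetch_all_parts(fetch_part, snapshot_id, parts, batch_size=500):
    records = []
    for part in range(1, parts + 1):
        # Each call pulls at most `batch_size` records from one part.
        records.extend(fetch_part(snapshot_id, part, batch_size))
    return records

# Simulate a 3-part snapshot with 2 records per part.
fake = lambda sid, part, size: [f"{sid}-p{part}-r{i}" for i in range(2)]
rows = fetch_all_parts(fake, "s_1", 3)
```

In n8n, the same pattern is usually built with a loop over the part numbers feeding the Download Snapshot node, accumulating results in a downstream node.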



