The pipeline endpoints enable you to execute and monitor query pipelines: a feature that lets you chain multiple dependent queries together and run them as one coordinated workflow.

What are Query Pipelines?

Query pipelines are sequences of related queries that execute in a specific order based on their dependencies. Instead of executing individual queries separately, pipelines allow you to:
  • Define Dependencies: Specify which queries depend on the results of other queries
  • Execute Atomically: Run all queries in the pipeline as a single unit
  • Coordinate Updates: Ensure related queries are always executed together
  • Track Progress: Monitor the status of each query in the pipeline

How Pipelines Work

  1. Inspect Pipeline: Use the Get Query Pipeline endpoint to view the pipeline structure
  2. Execute Pipeline: Use the Execute Pipeline endpoint (lineage-based) or the Execute Query Pipeline endpoint (for a predefined pipeline)
  3. Monitor Progress: Poll the Get Pipeline Execution Status endpoint to track execution
  4. Retrieve Results: Once complete, access results for each query using their individual execution IDs
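The inspect-then-execute steps above can be sketched in Python. This is a minimal sketch, not a reference client: the base URL, the `X-Dune-API-Key` auth header, and the `pipeline_execution_id` response field name are assumptions to verify against the full API reference.

```python
import json
import urllib.request

BASE_URL = "https://api.dune.com/api"  # assumed base URL; confirm in the API reference


def query_pipeline_url(query_id: int) -> str:
    """URL for the Get Query Pipeline endpoint."""
    return f"{BASE_URL}/v1/query/{query_id}/pipeline"


def execute_query_pipeline_url(query_id: int) -> str:
    """URL for the Execute Query Pipeline endpoint."""
    return f"{BASE_URL}/v1/query/{query_id}/pipeline/execute"


def _call(url: str, api_key: str, method: str = "GET") -> dict:
    # The X-Dune-API-Key header is an assumed auth scheme; check the docs.
    req = urllib.request.Request(url, method=method,
                                 headers={"X-Dune-API-Key": api_key})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def inspect_and_execute(query_id: int, api_key: str) -> str:
    """Fetch the pipeline definition, start an execution, return its id."""
    definition = _call(query_pipeline_url(query_id), api_key)           # step 1: inspect
    run = _call(execute_query_pipeline_url(query_id), api_key, "POST")  # step 2: execute
    # Field name assumed from the status endpoint's {pipeline_execution_id} parameter.
    return run["pipeline_execution_id"]
```

The returned id then feeds into the status endpoint for steps 3 and 4.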

Pipeline Components

A pipeline execution consists of multiple nodes, where each node can be:
  • Query Execution: A DuneSQL query that runs as part of the pipeline
  • Materialized View Refresh: A materialized view that gets refreshed during pipeline execution
Each node has its own execution status and can be tracked independently.
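Since each node reports its own status, a status response can be summarized per state. The payload shape below (a `nodes` list with `type` and `state` fields) is a hypothetical illustration, not the documented schema:

```python
from collections import Counter


def summarize_nodes(status_response: dict) -> Counter:
    """Count pipeline nodes by execution state.

    Assumes a hypothetical response with a "nodes" list; verify the real
    schema against the Get Pipeline Execution Status reference.
    """
    return Counter(node["state"] for node in status_response.get("nodes", []))


# Made-up payload mixing both node kinds:
payload = {"nodes": [
    {"type": "query_execution", "state": "COMPLETED"},
    {"type": "materialized_view_refresh", "state": "EXECUTING"},
]}
```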

Common Use Cases

Data Transformation Pipelines

Build multi-stage data processing workflows where each stage depends on the previous one:
  • Stage 1: Extract raw data
  • Stage 2: Transform and clean data
  • Stage 3: Aggregate results
  • Stage 4: Generate final metrics

Coordinated Dashboard Updates

Ensure all queries powering a dashboard are executed together with fresh data:
  • Execute all dependent queries in the correct order
  • Maintain consistency across related metrics
  • Update materialized views that feed into dashboard queries

Incremental Data Processing

Process data incrementally through multiple dependent steps:
  • Process new transactions
  • Update aggregated tables
  • Refresh summary statistics
  • Update trend indicators

Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| Execute Pipeline | POST /v1/pipelines/execute | Builds and executes a pipeline including all materialized views in the query's lineage |
| Get Pipeline Execution Status | GET /v1/pipelines/executions/{pipeline_execution_id}/status | Retrieves the status of a pipeline execution, including all nodes |

| Endpoint | Method | Description |
| --- | --- | --- |
| Get Query Pipeline | GET /v1/query/{query_id}/pipeline | Retrieves the pipeline definition for a query, including all dependencies and execution order |
| Execute Query Pipeline | POST /v1/query/{query_id}/pipeline/execute | Executes a predefined query pipeline starting from the specified query |

Pipeline vs. Regular Query Execution

| Feature | Regular Execution | Pipeline Execution |
| --- | --- | --- |
| Queries | Single query | Multiple dependent queries |
| Coordination | Manual | Automatic |
| Dependencies | None | Enforced by pipeline |
| Monitoring | Per-query | Unified view of all nodes |
| Use Case | Independent analysis | Complex workflows |
Use Execute Pipeline for automatic lineage-based execution or Execute Query Pipeline for predefined pipelines. Then monitor progress with the Get Pipeline Execution Status endpoint using the returned pipeline_execution_id.
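A minimal polling loop for the status endpoint might look like the following. The terminal state names are assumptions, and the HTTP call is injected as a callable so the sketch stays independent of any particular client:

```python
import time

# Assumed terminal state names; confirm the real values in the API reference.
TERMINAL_STATES = {"COMPLETED", "FAILED", "CANCELLED"}


def wait_for_pipeline(execution_id: str, fetch_status,
                      poll_seconds: float = 5.0, timeout: float = 600.0) -> dict:
    """Poll Get Pipeline Execution Status until a terminal state or timeout.

    fetch_status is any callable that returns the parsed status JSON for
    /v1/pipelines/executions/{execution_id}/status.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status(execution_id)
        if status.get("state") in TERMINAL_STATES:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"pipeline {execution_id} still running after {timeout}s")
        time.sleep(poll_seconds)
```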

Credits and Performance

Pipeline executions consume credits based on:
  • The performance tier selected (medium or large)
  • Actual compute resources used by each query in the pipeline
  • Number of queries and complexity of operations
You can specify the performance tier when executing the pipeline to optimize for speed or cost.
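A small helper can validate the tier before building the execute request body. The field names used here (`query_id`, `performance`) are illustrative assumptions, not the documented schema; only the "medium"/"large" tier values come from this page:

```python
import json


def execute_pipeline_body(query_id: int, performance: str = "medium") -> bytes:
    """Build a JSON request body for POST /v1/pipelines/execute (field names assumed)."""
    if performance not in ("medium", "large"):
        raise ValueError("performance tier must be 'medium' or 'large'")
    return json.dumps({"query_id": query_id, "performance": performance}).encode()
```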
Pipelines are created and configured in the Dune web application. The API endpoints allow you to execute and monitor existing pipelines programmatically.