GET /v1/execution/{execution_id}/results/csv

You must pass the execution_id obtained from an execute query POST request.

Returns the status, metadata, and query results (in CSV format) of a query execution. A minimal request sketch follows the notes below.

  • See Filtering, Sorting, and Sampling to learn how to flexibly retrieve query results.
  • See Pagination to get the most out of the API and to handle large results.
  • Result data from an execution is stored with an expiration date of 90 days. The expiration is visible in the "expires_at" field of the execution status and results JSON bodies (it is not returned by this CSV endpoint).
  • Dune enforces an internal maximum query result size (currently 8GB, subject to increase in the future). If your query yields more than 8GB of data, the stored result is truncated. In that case, pulling the result data (for example via pagination) without setting allow_partial_results to true triggers the error message: "error": "Partial Result, please request with 'allows_partial_results=true'". To retrieve the truncated result, pass allow_partial_results=true, but make sure you indeed want the partial data.
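
As a quick illustration, here is a minimal sketch of calling this endpoint with Python's requests library. The base URL (https://api.dune.com/api) is assumed, and the API key and execution ID are placeholders, not values from this page.

    import requests

    API_KEY = "YOUR_API_KEY"            # placeholder: your Dune API key
    EXECUTION_ID = "YOUR_EXECUTION_ID"  # placeholder: returned by the execute query POST request

    url = f"https://api.dune.com/api/v1/execution/{EXECUTION_ID}/results/csv"
    headers = {"X-Dune-Api-Key": API_KEY}

    response = requests.get(url, headers=headers)
    response.raise_for_status()

    csv_text = response.text            # the body is plain text (CSV), not JSON
    print(csv_text.splitlines()[0])     # header row with the column names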

Headers

X-Dune-Api-Key
string
required

API Key for the service

Path Parameters

execution_id
string
required

Execution ID

Query Parameters

api_key
string

Alternative to using the X-Dune-Api-Key header

allow_partial_results
boolean

Enables returning a query result when the full result was too large and only a partial (truncated) result is available. By default, allow_partial_results is false and a failed state is returned instead.
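
A hedged sketch of opting in to a truncated result, using the same assumed base URL and placeholder values as the example above:

    import requests

    resp = requests.get(
        "https://api.dune.com/api/v1/execution/YOUR_EXECUTION_ID/results/csv",
        headers={"X-Dune-Api-Key": "YOUR_API_KEY"},
        params={"allow_partial_results": "true"},  # accept the truncated result instead of an error
    )
    resp.raise_for_status()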

columns
string

Specifies a comma-separated list of column names to return. If omitted, all columns are included. Tip: use this to limit the result to specific columns, reducing the datapoints cost of the call.
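
For example, restricting the result to two columns; the column names here are illustrative, not taken from this page:

    import requests

    resp = requests.get(
        "https://api.dune.com/api/v1/execution/YOUR_EXECUTION_ID/results/csv",
        headers={"X-Dune-Api-Key": "YOUR_API_KEY"},
        params={"columns": "block_time,tx_hash"},  # hypothetical column names
    )
    resp.raise_for_status()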

filters
string

Expression to filter which rows of the result are returned, similar to a SQL WHERE clause. See the Filtering section of the docs for more details. This parameter is incompatible with sample_count.

ignore_max_datapoints_per_request
boolean

There is a default limit of 250,000 datapoints per request to make sure you don't accidentally spend all your credits in one call. To bypass this limit, add ignore_max_datapoints_per_request=true.

limit
integer

Limits the number of rows to return. Together with 'offset', this allows easy, incremental pagination through results. This parameter is incompatible with sampling (sample_count).

offset
integer

Row offset at which to start returning results (inclusive and zero-based: the first row is offset=0). Together with 'limit', this allows easy, incremental pagination through results, as sketched below. This parameter is incompatible with sampling (sample_count).
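
A sketch of incremental pagination with limit and offset, under the same assumptions as the examples above; consult the Pagination docs for the endpoint's exact paging behaviour:

    import requests

    url = "https://api.dune.com/api/v1/execution/YOUR_EXECUTION_ID/results/csv"
    headers = {"X-Dune-Api-Key": "YOUR_API_KEY"}
    page_size = 10_000  # rows per request

    pages = []
    offset = 0
    while True:
        resp = requests.get(
            url,
            headers=headers,
            params={"limit": page_size, "offset": offset},
        )
        resp.raise_for_status()
        pages.append(resp.text)
        # Stop once a page comes back shorter than a full page; see the
        # Pagination docs for the endpoint's exact paging semantics.
        if len(resp.text.splitlines()) < page_size:
            break
        offset += page_size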

sample_count
integer

Number of rows to return from the result by sampling the data. This is useful when you want a uniform sample instead of the entire result. If the result has fewer rows than the sample count, the entire result is returned. Note that the sample is randomized, so repeated calls may return different rows. This parameter is incompatible with the offset, limit, and filters parameters.

sort_by
string

Expression defining the order in which results are returned, similar to a SQL ORDER BY clause. See the Sorting section of the docs for more details.
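
As an illustration, combining filters and sort_by; the column names and expressions are hypothetical, and the exact syntax is described in the Filtering and Sorting sections:

    import requests

    resp = requests.get(
        "https://api.dune.com/api/v1/execution/YOUR_EXECUTION_ID/results/csv",
        headers={"X-Dune-Api-Key": "YOUR_API_KEY"},
        params={
            "filters": "amount_usd > 1000",  # hypothetical column; WHERE-like expression
            "sort_by": "block_time desc",    # hypothetical column; ORDER BY-like expression
            "limit": 100,
        },
    )
    resp.raise_for_status()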

Response

200 - text/plain

The response body is the query result as a CSV-formatted string.
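
Because the body is a plain CSV string, it can be parsed with the Python standard library; a minimal sketch under the same assumptions as above:

    import csv
    import io

    import requests

    resp = requests.get(
        "https://api.dune.com/api/v1/execution/YOUR_EXECUTION_ID/results/csv",
        headers={"X-Dune-Api-Key": "YOUR_API_KEY"},
    )
    resp.raise_for_status()

    reader = csv.DictReader(io.StringIO(resp.text))
    for row in reader:
        print(row)  # each row is a dict keyed by the CSV header columns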