Get Execution Result
Given an execution ID, returns the result of an execution request. You must pass the execution_id obtained from making an execute query POST request.
The result returns the status, metadata, and query results (in JSON) from the query execution.
- Read more on Filtering, Sorting, and Sampling to learn how to flexibly retrieve query results.
- Read more on Pagination to get the most out of the API and handle large results.
- Result data from an execution is stored for 90 days. This is visible in the API response via the “expires_at” field in the execution status and results body.
- Dune internally has a maximum query result size limit (currently 8GB, subject to increase in the future). If your query yields more than 8GB of data, the result will be truncated in storage. In such cases, pulling the result data (using pagination) without setting allow_partial_results to true will trigger an error message: “error”: “Partial Result, please request with ‘allows_partial_results=true’”. If you wish to retrieve the partial results, pass the parameter allow_partial_results=true, but please make sure you indeed want to fetch the truncated result (see the example request below).
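For illustration, here is a minimal Python sketch of calling this endpoint with the requests library. The base URL, the response field names ("state", "expires_at", "result", "rows"), and the execution_id value are assumptions made for this example and are not taken from this page.

```python
import os
import requests

# Minimal sketch: fetch the result of a previously started execution.
# Base URL and response field names are assumed for illustration;
# the execution_id below is a placeholder.
API_KEY = os.environ["DUNE_API_KEY"]
EXECUTION_ID = "01HEXAMPLEEXECUTIONID"  # hypothetical execution_id from an execute query POST

url = f"https://api.dune.com/api/v1/execution/{EXECUTION_ID}/results"
response = requests.get(url, headers={"X-Dune-Api-Key": API_KEY})
response.raise_for_status()

body = response.json()
print(body["state"])           # execution state; terminal once the run has finished
print(body["expires_at"])      # result data is stored for 90 days
rows = body["result"]["rows"]  # the query results, as a list of JSON rows
```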
Headers
X-Dune-Api-Key
API key for the service
Path Parameters
execution_id
Execution ID
Query Parameters
Alternative to using the X-Dune-Api-Key header
Enables returning a partial result when the full query result was too large and only a truncated result is available. By default, allow_partial_results is set to false and a failed state is returned.
Specifies a comma-separated list of column names to return. If omitted, all columns are included. Tip: use this to limit the result to specific columns, reducing the datapoint cost of the call.
Expression to filter out rows from the results to return. This expression is similar to a SQL WHERE clause. See the Filtering section of the doc for more details. This parameter is incompatible with sample_count.
There is a default 250,000-datapoint limit to make sure you don't accidentally spend all your credits in one call. To ignore the max limit, add ignore_max_datapoints_per_request=true.
Limits the number of rows to return. Together with 'offset', this allows easy and efficient incremental pagination through results. This parameter is incompatible with sampling (sample_count).
Row number from which to start returning results (inclusive; the first row is offset=0). Together with 'limit', this allows easy and efficient incremental pagination through results. This parameter is incompatible with sampling (sample_count).
Number of rows to return from the result by sampling the data. This is useful when you want a uniform sample instead of the entire result. If the result has fewer rows than the sample count, the entire result is returned. Note that this will return a randomized sample, so not every call will return the same result. This parameter is incompatible with the offset, limit, and filters parameters.
Expression to define the order in which the results should be returned. This expression is similar to a SQL ORDER BY clause. See the Sorting section of the doc for more details.
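As a sketch of how the query parameters above can be combined, the snippet below pages through a result with limit and offset while applying a filter expression and allowing partial results. The base URL, the example filter expression, and the response shape are assumptions for illustration only.

```python
import os
import requests

# Hypothetical pagination loop using limit/offset, a filter expression, and
# allow_partial_results (in case the stored result was truncated at the size limit).
API_KEY = os.environ["DUNE_API_KEY"]
EXECUTION_ID = "01HEXAMPLEEXECUTIONID"  # placeholder execution_id
url = f"https://api.dune.com/api/v1/execution/{EXECUTION_ID}/results"

limit, offset = 10_000, 0
all_rows = []
while True:
    response = requests.get(
        url,
        headers={"X-Dune-Api-Key": API_KEY},
        params={
            "limit": limit,                    # rows per page
            "offset": offset,                  # first row of this page (first row is offset=0)
            "filters": "block_number > 1000",  # hypothetical SQL-WHERE-like expression
            "allow_partial_results": "true",   # accept a truncated result instead of an error
        },
    )
    response.raise_for_status()
    rows = response.json().get("result", {}).get("rows", [])
    all_rows.extend(rows)
    if len(rows) < limit:  # a short page means we've reached the end
        break
    offset += limit
```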
Response
Timestamp of when the query execution was cancelled, if applicable.
Timestamp of when the query execution ended.
Unique identifier for the execution of the query.
Timestamp of when the query execution started.
Timestamp of when the query result expires.
Whether the state of the query execution is terminal. This can be used for polling purposes.
Offset that can be used to retrieve the next page of results.
URI that can be used to fetch the next page of results.
Unique identifier of the query.
The object containing the results and metadata of the query execution.
The state of the query execution.
Timestamp of when the query was submitted.
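To tie the response fields together, here is a rough polling-and-paging sketch. The field spellings (is_execution_finished, state, next_offset, result) are assumed renderings of the fields described above, and the base URL is likewise an assumption.

```python
import os
import time
import requests

# Sketch: poll until the execution reaches a terminal state, then follow the
# "next page" offset from each response to collect all rows.
API_KEY = os.environ["DUNE_API_KEY"]
EXECUTION_ID = "01HEXAMPLEEXECUTIONID"  # placeholder execution_id
url = f"https://api.dune.com/api/v1/execution/{EXECUTION_ID}/results"
headers = {"X-Dune-Api-Key": API_KEY}

# 1) Poll using the "is the execution finished" flag described above.
while True:
    body = requests.get(url, headers=headers, params={"limit": 500}).json()
    if body.get("is_execution_finished"):
        break
    time.sleep(2)  # simple fixed delay between polls

# 2) Page through the full result using the next-page offset from each response.
rows = list(body.get("result", {}).get("rows", []))
next_offset = body.get("next_offset")
while next_offset is not None:
    body = requests.get(
        url, headers=headers, params={"limit": 500, "offset": next_offset}
    ).json()
    rows.extend(body.get("result", {}).get("rows", []))
    next_offset = body.get("next_offset")

print(f"fetched {len(rows)} rows; state = {body.get('state')}")
```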