Our API supports pagination on all `/results` endpoints, enabling efficient data retrieval by dividing large result sets into smaller, manageable chunks. Pagination helps prevent overload, keeps performance smooth, and lets you navigate large datasets without hitting response-size limit errors. Pagination is available for the following endpoints:
Pagination can be effectively combined with filtering and sorting to optimize data fetching.
To paginate through results:

- Use the `limit` parameter to set the maximum number of results per request.
- Use the `offset` parameter to define the starting point for data retrieval; it defaults to 0 (the first row).
- The `next_offset` and `next_uri` fields in the response body indicate how to fetch the next page. For CSV responses, look for the `X-Dune-Next-Offset` and `X-Dune-Next-Uri` headers.
- The server may lower the provided limit if it is too large, to keep data handling efficient. Follow these indicators to navigate through the dataset seamlessly.

Request parameters:

- `limit` (required, integer): the maximum number of results to return per request.
- `offset` (integer): the row to start from; use it together with `limit` to navigate through results in an efficient, incremental manner.

The following response fields are related to pagination and can be used when fetching paginated results. If they are present, use them to request the next page; if they are absent, there are no more results to fetch.

- `next_offset` and `next_uri` in the JSON response body.
- `x-dune-next-offset` and `x-dune-next-uri` headers in CSV responses.
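The loop above can be sketched in Python. This is a minimal illustration, not the official client: `fetch_page()` is a stand-in for a real HTTP call to a `/results` endpoint, and the exact JSON shape (`{"result": {"rows": [...]}, "next_offset": ...}`) is an assumption for illustration.

```python
# Minimal sketch of a limit/offset pagination loop. fetch_page() simulates
# a paginated /results endpoint over a 25-row result set.

def fetch_page(limit, offset, _data=tuple(range(25))):
    """Simulate one page of a paginated /results response."""
    rows = list(_data[offset:offset + limit])
    page = {"result": {"rows": rows, "total_row_count": len(_data)}}
    if offset + limit < len(_data):
        page["next_offset"] = offset + limit  # absent on the last page
    return page

def fetch_all(limit=10):
    rows, offset = [], 0
    while True:
        page = fetch_page(limit, offset)
        rows.extend(page["result"]["rows"])
        if "next_offset" not in page:  # no next_offset: no more results
            return rows
        offset = page["next_offset"]

print(len(fetch_all()))  # all 25 rows collected across 3 requests
```

The key design point is the termination check: the client never computes "is there more data?" itself; it simply stops when the server omits `next_offset`.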
If you pass in an invalid `offset` value, you will get an empty result set rather than an error. For example, if there are only 25 rows of result data and you pass `offset=30`, you receive an empty result with metadata like this. Note the response field `result.total_row_count`, indicating this result has only 25 rows.
Example empty response
Example paginated response
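As a sketch of the case above, a client can detect an out-of-range offset by checking for an empty `rows` array and comparing the requested offset against `result.total_row_count`. The response dict here is an assumed, illustrative shape, not a captured API response.

```python
# Illustrative (assumed) shape of an empty response for offset=30 against a
# 25-row result: no error is returned, just zero rows plus row-count metadata.

response = {
    "result": {
        "rows": [],             # empty: the offset is past the last row
        "total_row_count": 25,  # the full result has only 25 rows
    }
}

requested_offset = 30
total = response["result"]["total_row_count"]
if not response["result"]["rows"] and requested_offset >= total:
    print(f"offset {requested_offset} is beyond the {total}-row result")
```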
Data Returned Limit
When using pagination, we aim for page sizes that work well even on mobile clients, with low data and RAM consumption. To avoid extra work for developers, when a client specifies a very large `limit` value (for example, 500,000 rows), the server does not return an error; instead, it overrides the limit with a lower, safer value (for example, 30,000 rows) and always provides the correct `next_offset` and `limit` values to use in subsequent paginated requests. The exact maximum limit value is subject to change.
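The practical consequence is that a client should never assume its requested limit was honored: advance using the `next_offset` the server returns, not `offset += requested_limit`. The sketch below illustrates this with a stand-in `fetch_page()` that caps pages at 3 rows (an illustrative server-side maximum, not the real one).

```python
# Why you should trust the server-returned next_offset: here the "server"
# silently caps an oversized limit at 3 rows per page (illustrative value).

def fetch_page(limit, offset, _data=tuple(range(10)), _server_max=3):
    limit = min(limit, _server_max)  # server overrides an oversized limit
    rows = list(_data[offset:offset + limit])
    page = {"result": {"rows": rows}}
    if offset + limit < len(_data):
        page["next_offset"] = offset + limit
    return page

rows, offset = [], 0
while True:
    page = fetch_page(limit=500_000, offset=offset)  # huge requested limit
    rows.extend(page["result"]["rows"])
    if "next_offset" not in page:
        break
    offset = page["next_offset"]  # use the server's offset, not your limit

print(len(rows))  # all 10 rows, fetched 3 at a time
```

Had the loop advanced by the requested limit of 500,000 instead, it would have skipped every row after the first page.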
Data Size Limit
Dune internally has a maximum query result size limit (currently 8GB, subject to increase in the future). If your query yields more than 8GB of data, the result is truncated in storage. In that case, pulling the result data (using pagination) without setting `allow_partial_results` to true triggers the error message: `"error": "Partial Result, please request with 'allows_partial_results=true'"`. If you wish to retrieve the partial result, pass the parameter `allow_partial_results=true`, but make sure you indeed want the truncated data.
For pagination, this means that to read the partial result (the first 8GB of data) with the `limit` and `offset` parameters, you must set `allow_partial_results=true`. The response field `result.result_set_size` reports the stored size of the result.
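A client can handle this defensively: attempt the normal fetch, and only opt in to partial results after seeing the specific error. The sketch below uses a stand-in `fetch_result()` and assumed response shapes; it is not the official client behavior.

```python
# Sketch of reading a truncated (>8GB) result: the first request fails with
# the partial-result error, and the retry opts in via allow_partial_results.

PARTIAL_ERROR = "Partial Result, please request with 'allows_partial_results=true'"

def fetch_result(limit, offset, allow_partial_results=False, _truncated=True):
    """Simulate fetching a page of a result that was truncated in storage."""
    if _truncated and not allow_partial_results:
        return {"error": PARTIAL_ERROR}
    # result_set_size reports the stored (capped) size of the result.
    return {"result": {"rows": ["row"] * limit,
                       "result_set_size": 8 * 2**30}}

page = fetch_result(limit=5, offset=0)
if page.get("error") == PARTIAL_ERROR:
    # Opt in explicitly: make sure you really want the truncated data.
    page = fetch_result(limit=5, offset=0, allow_partial_results=True)

print(len(page["result"]["rows"]))  # 5 rows from the partial result
```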