from dune_client.client import DuneClient

dune = DuneClient(
    api_key='<paste your api key here>',
    base_url="https://api.dune.com",
    request_timeout=300  # request will time out after 300 seconds
)
import dotenv, os
from dune_client.client import DuneClient

# change the current working directory to where the .env file lives
os.chdir("/Users/abc/project")
# load .env file
dotenv.load_dotenv(".env")
# setup Dune Python client
dune = DuneClient.from_env()
To avoid request timeouts, you can raise the timeout above the default 10 seconds in the same .env file where we saved the API key. Here is a sample .env file.
DUNE_API_KEY=<paste your API key here>
DUNE_API_REQUEST_TIMEOUT=120
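To confirm the setting takes effect, here is a minimal sketch that loads the .env file above and builds the client from the environment; the request_timeout attribute used to inspect the value is an assumption about the client object, not something documented on this page.

import dotenv
from dune_client.client import DuneClient

dotenv.load_dotenv(".env")    # makes DUNE_API_KEY and DUNE_API_REQUEST_TIMEOUT visible
dune = DuneClient.from_env()  # reads both values from the environment
print(dune.request_timeout)   # expected 120; attribute name is an assumption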
Alternatively, you can raise the timeout above the default 10 seconds in the same config.py file where we saved the API key. Here is a sample config.py file.
dune_api_key = '<paste your API key here>'
request_timeout = 300
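The values in config.py are then passed to the client explicitly when you construct it. A minimal sketch, assuming config.py sits next to your script and is importable as config:

import config
from dune_client.client import DuneClient

# build the client from the values stored in config.py
dune = DuneClient(
    api_key=config.dune_api_key,
    request_timeout=config.request_timeout
)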
You can either fetch the latest query result without triggering a new execution, or trigger an execution and then fetch the result to ensure the freshest data.
query_result = dune.get_latest_result(3373921)  # get latest result in json format
# query_result = dune.get_latest_result_dataframe(3373921)  # get latest result in Pandas dataframe format
from dune_client.query import QueryBase
# from dune_client.types import QueryParameter  # needed if you pass query parameters

query = QueryBase(
    query_id=3373921,
    # uncomment and change the parameter values if needed
    # params=[
    #     QueryParameter.text_type(name="contract", value="0x6B175474E89094C44Da98b954EedeAC495271d0F"),  # default is DAI
    #     QueryParameter.text_type(name="owner", value="owner"),  # default using vitalik.eth's wallet
    # ],
)
query_result = dune.run_query_dataframe(
    query=query
    # , ping_frequency=10  # uncomment to change the seconds between checking execution status, default is 1 second
    # , performance="large"  # uncomment to run query on large engine, default is medium
    # , batch_size=5_000  # uncomment to change the maximum number of rows to retrieve per batch of results, default is 32_000
)
# Note: to get the result in csv format, call run_query_csv(); for json format, call run_query().
To paginate query results, see the pagination page for more information.
For higher-level functions like run_query(), pagination is handled automatically behind the scenes, so the returned result contains the entire dataset. You can pass the batch_size parameter to set the maximum number of rows per batch (i.e., per pagination call).
For lower-level functions like get_execution_results(), you can pass the pagination parameters limit and offset directly, as instructed here, as shown in the sketch below.
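For illustration, here is a minimal sketch of manual pagination with the lower-level API, assuming the query object defined above and that the execution triggered by execute_query() has completed before the pages are fetched:

execution = dune.execute_query(query)  # trigger a fresh execution

page_size = 1_000
first_page = dune.get_execution_results(
    execution.execution_id,
    limit=page_size,  # maximum number of rows in this page
    offset=0          # start at the first row
)
# fetch the next page by advancing the offset by one page
second_page = dune.get_execution_results(
    execution.execution_id,
    limit=page_size,
    offset=page_size
)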
import dotenv, os
from dune_client.types import QueryParameter
from dune_client.client import DuneClient
from dune_client.query import QueryBase

# change the current working directory to where the .env file lives
os.chdir("/Users/abc/project")
# load .env file
dotenv.load_dotenv(".env")
# setup Dune Python client
dune = DuneClient.from_env()

""" get the latest executed result without triggering a new execution """
query_result = dune.get_latest_result(3373921)  # get latest result in json format
# query_result = dune.get_latest_result_dataframe(3373921)  # get latest result in Pandas dataframe format

""" query the query (execute and get latest result) """
query = QueryBase(
    query_id=3373921,
    # uncomment and change the parameter values if needed
    # params=[
    #     QueryParameter.text_type(name="contract", value="0x6B175474E89094C44Da98b954EedeAC495271d0F"),  # default is DAI
    #     QueryParameter.text_type(name="owner", value="owner"),  # default using vitalik.eth's wallet
    # ],
)
query_result = dune.run_query_dataframe(
    query=query
    # , ping_frequency=10  # uncomment to change the seconds between checking execution status, default is 1 second
    # , performance="large"  # uncomment to run query on large engine, default is medium
    # , batch_size=5_000  # uncomment to change the maximum number of rows to retrieve per batch of results, default is 32_000
)
# Note: to get the result in csv format, call run_query_csv(); for json format, call run_query().
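Because run_query_dataframe() returns a Pandas DataFrame, the result can be inspected and saved with standard Pandas calls; the output filename below is purely illustrative.

# inspect the first few rows and save the result locally
print(query_result.head())
query_result.to_csv("query_3373921_result.csv", index=False)  # illustrative filename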