dbt Template Repository
Get started quickly with our official dbt template repository, featuring example models for all model strategies and CI/CD workflows.
What is dbt?
dbt (data build tool) is the industry-standard framework for analytics engineering. It enables data teams to transform raw data into analysis-ready datasets using SQL and software engineering best practices. Key capabilities:
- SQL-based transformations: Write SELECT statements, dbt handles the DDL/DML
- Incremental models: Efficiently update large tables with merge, append, or delete+insert strategies
- Testing & documentation: Built-in data quality tests and auto-generated documentation
- Version control: Manage transformation logic in Git with code review workflows
- Dependency management: Define relationships between models with automatic execution ordering
dbt Documentation
Learn more about dbt’s features, best practices, and advanced capabilities in the official dbt documentation.
Why dbt on Dune?
Enterprise-Grade Transformations
- Full incremental support: Use merge, delete+insert, or append strategies for efficient updates
- Testing framework: Validate data quality with dbt’s built-in testing capabilities
- Documentation: Generate and maintain documentation alongside your transformations
- Modularity: Build reusable models and macros for complex transformation logic
Seamless Integration
- Drop-in compatibility: Works with your existing dbt projects and workflows
- Version control: Manage transformation logic in Git with PR reviews
- Production orchestration: Schedule with GitHub Actions, Airflow, Prefect, or dbt Cloud
- Private by default: Keep proprietary transformation logic within your organization
No Spellbook Dependency
- Autonomous deployment: Deploy transformations on your schedule without community review
- Proprietary logic: Keep sensitive business logic private
- Faster iteration: Test and deploy changes immediately
Quick Start
The fastest way to get started is using our dbt template repository, which includes pre-configured profiles, sample models, and CI/CD workflows.
Prerequisites
- Dune Enterprise account with Data Transformations enabled
- Dune API key (generate one here)
- Team name on Dune (defines your namespace)
- dbt installed locally (we recommend using uv for dependency management)
Connection Details
Connect to Dune using these parameters:
| Parameter | Value |
|---|---|
| Host | trino.api.dune.com |
| Port | 443 |
| Protocol | HTTPS |
| Catalog | dune (required) |
| Authentication | JWT (use your Dune API key) |
| Session Property | transformations=true (required for write operations) |
1. Use the Template Repository
We provide a complete dbt project template to get started quickly.
GitHub Template: github.com/duneanalytics/dune-dbt-template
The template includes:
- Pre-configured dbt profiles for dev and prod environments
- Sample models demonstrating all model types
- GitHub Actions workflows for CI/CD
- Cursor AI rules for dbt best practices on Dune
- Example project structure following dbt conventions
2. Configure Environment Variables
Set these required environment variables:
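A minimal sketch of the variables used throughout this guide; DEV_SCHEMA_SUFFIX is optional and only affects development schemas (the template documents the exact set it expects):

```bash
# Dune credentials and namespace (values are placeholders)
export DUNE_API_KEY="<your-dune-api-key>"    # used as the JWT token when connecting
export DUNE_TEAM_NAME="<your-team-handle>"   # defines your namespace / schema

# Optional: personal suffix for development schemas, e.g. {team}__tmp_alice
export DEV_SCHEMA_SUFFIX="alice"
```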
3. Configure dbt Profile
Your profiles.yml should look like this:
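A minimal dbt-trino profile sketch using the connection details above. The profile name dune_dbt is hypothetical, and the template ships a ready-made version:

```yaml
dune_dbt:
  target: dev
  outputs:
    dev:
      type: trino
      method: jwt
      jwt_token: "{{ env_var('DUNE_API_KEY') }}"
      host: trino.api.dune.com
      port: 443
      http_scheme: https
      database: dune                              # the required "dune" catalog
      schema: "{{ env_var('DUNE_TEAM_NAME') }}"   # your team namespace
      threads: 4
      session_properties:
        transformations: true                     # required for write operations
```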
The transformations: true session property is required. This tells Dune that you're running data transformation operations that need write access.
4. Test Your Connection
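From the project root, with the environment variables above set, verify the connection:

```bash
# Confirms dbt can reach trino.api.dune.com with your profile and credentials
dbt debug --target dev
```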
How It Works
Namespace Isolation
All tables and views you create are organized into your team's namespace:
- Production schema: {your_team} for production tables
- Development schemas: {your_team}__tmp_* for development and testing
Write Operations
Execute SQL statements to create and manage your data (see the example after this list):
- Create tables and views in your namespace
- Insert, update, or merge data using standard SQL
- Drop tables when no longer needed
- Optimize and vacuum tables to keep queries against them performant
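For example (my_team and the table name are illustrative):

```sql
-- Create a table in your namespace from a public Dune dataset
CREATE TABLE dune.my_team.daily_tx_counts AS
SELECT
    date_trunc('day', block_time) AS block_date,
    count(*) AS tx_count
FROM ethereum.transactions
WHERE block_time >= date '2024-01-01'
GROUP BY 1;

-- Drop it again once it is no longer needed
DROP TABLE IF EXISTS dune.my_team.daily_tx_counts;
```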
Data Access
What You Can Read:
- All public Dune datasets: Full access to blockchain data across all supported chains
- Your uploaded data: Private datasets you’ve uploaded to Dune
- Your transformation outputs: Tables and views created in your namespace
- Materialized views: Views that are materialized as tables in your namespace via the Dune app
What You Can Write:
- Your team namespace: {team_name} for production tables
- Development namespaces: {team_name}__tmp_* for dev and testing
- Private by default: All created tables are private unless explicitly made public
Restrictions:
- Write operations are restricted to your team's namespaces only
- Cannot write to public schemas or other teams' namespaces
- Schema naming rules enforced: no __tmp_ in team handles
Project Structure
The template repository follows standard dbt conventions:
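A rough sketch of the layout; exact file names may differ, so treat the template itself as authoritative:

```
dune-dbt-template/
├── dbt_project.yml
├── profiles.yml
├── macros/
│   └── get_custom_schema.sql
├── models/
│   └── example/           # view, table, and incremental example models
└── .github/
    └── workflows/
        ├── ci.yml
        └── prod.yml
```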
Schema Organization
Schemas are automatically organized based on your dbt target:
| Target | DEV_SCHEMA_SUFFIX | Schema Name | Use Case |
|---|---|---|---|
| dev | Not set | {team}__tmp_ | Local development (default) |
| dev | Set to alice | {team}__tmp_alice | Personal dev space |
| dev | Set to pr123 | {team}__tmp_pr123 | CI/CD per PR |
| prod | (any) | {team} | Production tables |
Schema names are generated by the get_custom_schema.sql macro in the template.
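A minimal sketch of how such a macro can implement the table above; this is illustrative, not the template's exact code:

```sql
-- macros/get_custom_schema.sql (illustrative override of dbt's generate_schema_name)
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- if target.name == 'prod' -%}
        {{ target.schema }}
    {%- else -%}
        {{ target.schema }}__tmp_{{ env_var('DEV_SCHEMA_SUFFIX', '') }}
    {%- endif -%}
{%- endmacro %}
```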
Development Workflow
Local Development
1. Create a feature branch
2. Develop models locally
3. Query your tables on Dune, remembering to use the dune. catalog prefix (see the example below)
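A minimal sketch of this loop; branch, model, and schema names are hypothetical:

```bash
git checkout -b feature/daily-tx-counts

# Build and test just your model against your dev schema ({team}__tmp_<suffix>)
dbt run  --select daily_tx_counts
dbt test --select daily_tx_counts

# Then query it in the Dune app with the dune. catalog prefix:
#   SELECT * FROM dune.my_team__tmp_alice.daily_tx_counts LIMIT 10;
```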
Pull Request Workflow
1. Push changes and open a PR
2. Automated CI runs (see the sketch after this list):
   - CI enforces that the branch is up-to-date with main
   - Runs modified models with --full-refresh in an isolated schema ({team}__tmp_pr{number})
   - Runs tests on modified models
   - Tests incremental run logic
3. Team review:
   - Review transformation logic in GitHub
   - Check CI results
   - Approve and merge when ready
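The CI steps map roughly onto dbt's state-based selection. A minimal sketch, assuming the previous production manifest has been downloaded to ./prod-artifacts and PR_NUMBER is supplied by the CI environment:

```bash
# Isolate this PR's schema, e.g. {team}__tmp_pr123
export DEV_SCHEMA_SUFFIX="pr${PR_NUMBER}"

# Full refresh of modified models (and their children), then test them
dbt run  --select state:modified+ --full-refresh --state ./prod-artifacts
dbt test --select state:modified+ --state ./prod-artifacts

# Re-run without --full-refresh to exercise the incremental logic
dbt run  --select state:modified+ --state ./prod-artifacts
```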
Production Deployment
The production workflow includes an hourly schedule (0 * * * *), but it's commented out by default; uncomment the corresponding lines when you're ready to run production jobs automatically. On each run it performs:
- State comparison: Uses the manifest from the previous run to detect changes
- Full refresh of modified models: Any changed models run with --full-refresh
- Incremental run: All models run with normal incremental logic
- Testing: All models are tested
- Notification: Email sent on failure
Incremental Model Strategies
dbt supports multiple strategies for incremental models. The template includes examples of each:
1. Merge Strategy (Recommended)
When to use: When you need to update existing rows and insert new ones. Example:
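A minimal sketch of a merge-based incremental model, assuming a dbt source is defined for Dune's ethereum.transactions table; model and column names are illustrative:

```sql
{{ config(
    materialized='incremental',
    incremental_strategy='merge',
    unique_key='wallet'
) }}

-- Latest activity per wallet: existing rows are updated, new wallets are inserted
select
    "from"           as wallet,
    max(block_time)  as last_seen
from {{ source('ethereum', 'transactions') }}
{% if is_incremental() %}
-- only scan rows newer than what is already in the target table
where block_time > (select max(last_seen) from {{ this }})
{% endif %}
group by 1
```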
2. Delete+Insert Strategy
When to use: When recomputing entire partitions (e.g., daily aggregations). Example:
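A minimal sketch that recomputes the most recent daily partitions on each run; the three-day window and source reference are illustrative assumptions:

```sql
{{ config(
    materialized='incremental',
    incremental_strategy='delete+insert',
    unique_key='block_date'
) }}

select
    date_trunc('day', block_time) as block_date,
    count(*)                      as tx_count,
    sum(gas_used)                 as total_gas_used
from {{ source('ethereum', 'transactions') }}
{% if is_incremental() %}
-- recompute the last few days; matching block_date rows are deleted and re-inserted
where block_time >= current_date - interval '3' day
{% endif %}
group by 1
```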
3. Append Strategy
When to use: For immutable event logs that only need new rows appended. Example:
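A minimal append-only sketch; no rows are ever updated, so no unique_key is needed (same assumed source as above):

```sql
{{ config(
    materialized='incremental',
    incremental_strategy='append'
) }}

select
    hash as tx_hash,
    block_time,
    value
from {{ source('ethereum', 'transactions') }}
{% if is_incremental() %}
where block_time > (select max(block_time) from {{ this }})
{% endif %}
```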
CI/CD with GitHub Actions
The template includes two GitHub Actions workflows:
CI Workflow (.github/workflows/ci.yml)
Runs on every pull request. It expects the following to be configured in your GitHub repository:
- DUNE_API_KEY
- DUNE_TEAM_NAME
Production Workflow (.github/workflows/prod.yml)
Runs hourly on the main branch:
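A simplified sketch of what such a workflow can look like; this is not the template's exact file, and the cron trigger is the part the template ships commented out:

```yaml
name: dbt production run
on:
  schedule:
    - cron: "0 * * * *"   # hourly
  workflow_dispatch: {}

jobs:
  dbt-prod:
    runs-on: ubuntu-latest
    env:
      DUNE_API_KEY: ${{ secrets.DUNE_API_KEY }}
      DUNE_TEAM_NAME: ${{ vars.DUNE_TEAM_NAME }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-trino
      - run: dbt run --target prod
      - run: dbt test --target prod
```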
Table Maintenance
Maintenance operations consume credits based on compute and data written. These operations are necessary to keep your tables performant and to reclaim storage space.
Manual Maintenance
Run OPTIMIZE and VACUUM to improve performance and reduce storage costs:
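An illustrative sketch using Trino-style maintenance statements; the exact syntax supported on Dune is documented in the SQL Operations Reference, so treat these as assumptions:

```sql
-- Compact small files so reads stay fast
ALTER TABLE dune.my_team.daily_tx_counts EXECUTE optimize;

-- Reclaim storage from data files that are no longer referenced
ALTER TABLE dune.my_team.daily_tx_counts EXECUTE vacuum;
```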
Automated Maintenance with dbt post-hooks
Add post-hooks to your model configuration:
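For example, inside a model's config block (the maintenance statement itself is illustrative, as above):

```sql
{{ config(
    materialized='incremental',
    incremental_strategy='merge',
    unique_key='wallet',
    post_hook=[
        "ALTER TABLE {{ this }} EXECUTE optimize"
    ]
) }}
```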
Project level post-hooks
Add post-hooks to your project configuration:
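For example, in dbt_project.yml, applied to every model under a folder; the project and folder names are hypothetical:

```yaml
models:
  my_dune_project:
    marts:
      +post-hook:
        - "ALTER TABLE {{ this }} EXECUTE optimize"
```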
Dropping Tables
dbt doesn't have a built-in way to drop tables. Options:
Option 1: Use dbt's --full-refresh flag, then remove the model
Option 2: Connect with a SQL client
Use any Trino-compatible client (Hex, Jupyter, DBeaver) to execute:
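For example (table name is illustrative):

```sql
DROP TABLE IF EXISTS dune.my_team.daily_tx_counts;
```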
Querying dbt Models on Dune
Pattern: dune.{schema}.{table}
Remember to include the dune. catalog prefix when using queries in the Dune app.
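For example, querying a production table and a dev table created by dbt (names are illustrative):

```sql
-- Production schema
SELECT * FROM dune.my_team.daily_tx_counts LIMIT 100;

-- Development schema
SELECT * FROM dune.my_team__tmp_alice.daily_tx_counts LIMIT 100;
```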
Where Your Data Appears
Tables and views created through dbt appear in the Data Explorer under: My Data → Connectors
- Browse your transformation datasets
- View table schemas and metadata
- Delete datasets directly from the UI
- Search and reference them in queries
Pricing & Credits
Data Transformations is an Enterprise-only feature with usage-based credit consumption.
Credit Consumption
Your credit usage includes three components:
Compute Credits
- Same as Fluid Engine credits for query execution
- Charged based on actual compute resources used
- Depends on query complexity and execution time
Write Operations
- Minimum 3 credits per write operation
- Scales with data volume (GB written)
- Applied to INSERT, MERGE, CREATE TABLE AS SELECT, etc.
Storage Credits
- 4 credits per GB per month
- Calculated based on end-of-day storage usage
- Encourages efficient data management and cleanup
Maintenance Operations
- OPTIMIZE, VACUUM, and ANALYZE operations consume credits
- Based on compute resources and data written during maintenance
- Can be automated with dbt post-hooks
There is no separate platform fee—you only pay for what you use through credits. See Billing for more details on credit pricing.
Use Cases
Enterprise Data Pipelines
Add Dune to your existing data infrastructure without reworking your workflows:
- Drop-in compatibility: Integrate with your current dbt projects, Airflow DAGs, or Prefect flows
- Full incremental support: Use merge, delete+insert, or append strategies for efficient updates
- Production orchestration: Schedule with the tools you already use (GitHub Actions, Airflow, Prefect)
- Version controlled: Keep all transformation logic in Git alongside your other data pipelines
Governance & Compliance
Meet enterprise requirements for data control and auditability:
- Private by default: All datasets remain private to your team unless explicitly shared
- Audit trails: Track every transformation through Git history and PR workflows
- Data lineage: Maintain clear lineage from raw data through transformations to analytics
- Review processes: Implement PR reviews and approval workflows before deploying to production
- Access control: Restrict write access to specific teams and namespaces
Complex Analytics Workflows
Build sophisticated multi-stage data pipelines:
- Read from Dune's comprehensive blockchain datasets across all chains
- Transform and enrich with your proprietary business logic
- Create reusable intermediate datasets for downstream analytics
- Chain multiple transformations into complex data products
Alternative to Spellbook
Build and maintain custom datasets without community review processes:
- Deploy transformations on your own schedule
- Keep proprietary logic private to your organization
- Faster iteration cycles without PR review delays
Troubleshooting
Connection Issues
Problem: dbt debug fails with a connection error.
Solution:
- Verify DUNE_API_KEY and DUNE_TEAM_NAME are set correctly
- Check that you have Data Transformations enabled for your team
- Ensure transformations: true is set in session properties
Models Not Appearing in Dune
Problem: Can't find tables in the Data Explorer or in queries.
Solution:
- Check the Connectors section in the Data Explorer under "My Data"
- Remember to use the dune. catalog prefix in queries
- Verify the table was created in the correct schema
Incremental Models Not Working
Problem: Incremental models always do a full refresh.
Solution:
- Check that the is_incremental() macro is used correctly
- Verify the unique_key configuration matches your table structure
- Ensure the target table exists before running incrementally
CI/CD Failures
Problem: GitHub Actions workflows are failing.
Solution:
- Verify secrets and variables are set correctly in GitHub
- Check that branch is up-to-date with main
- Review workflow logs for specific errors
Limitations
Metadata Discovery
Limited support for some metadata discovery queries like SHOW TABLES or SHOW SCHEMAS in certain contexts. This may affect autocomplete in some BI tools.
Workaround: Use the Data Explorer or query information_schema directly.
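For example, listing the tables in your production schema via information_schema (schema name is illustrative):

```sql
SELECT table_name
FROM dune.information_schema.tables
WHERE table_schema = 'my_team';
```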
Result Set Size
Large result sets may time out. Consider:
- Paginating with LIMIT and OFFSET
- Narrowing filters to reduce data volume
- Breaking complex queries into smaller parts
Read-After-Write Consistency
Tables and views are available for querying immediately after creation, but catalog caching may cause brief delays (typically under 60 seconds) before they appear in some listing operations.
Rate Limits
Rate limits for Data Transformations align with the Dune Analytics API:
- Requests are subject to the same rate limiting as API executions
- Large query operations run on the Large Query Engine tier
- See Rate Limits for detailed information
Best Practices
Model Organization
Schema Organization
- Use the dev target with personal suffixes during development
- Keep the prod target for production deployments only
- Consider separate schemas for different projects or domains
Performance Optimization
- Use incremental models for large datasets
- Partition by date fields when possible
- Add appropriate partitions via dbt configurations
- Run OPTIMIZE and VACUUM on large tables
Credit Management
- Monitor credit usage in your Dune dashboard
- Use incremental models to reduce compute and write costs
- Drop unused development tables regularly
- Implement table lifecycle policies
Data Management
- Clean up temporary/test data in __tmp_ schemas
- Document table retention requirements
- Regularly review and optimize storage usage
Version Control
- Store all transformation logic in Git
- Use meaningful commit messages
- Tag production releases
- Review PRs before merging to main
Documentation
Examples
Complete examples are available in the template repository:
- View Model: Lightweight, always fresh data
- Table Model: Static snapshots for specific points in time
- Merge Incremental: Update existing rows, insert new ones
- Delete+Insert Incremental: Recompute partitions efficiently
- Append Incremental: Add-only with deduplication
Resources
Documentation
- dbt Template Repository
- SQL Operations Reference
- Query Engine Overview - DuneSQL capabilities and read operations
- Trino Connector - General Trino connection guide
- dbt Documentation
- dbt-trino Setup Guide
Support
- Dune Discord: discord.gg/ErrzwBz
- Enterprise Support: Contact your account team
- GitHub Issues: Report issues in the template repository
Next Steps
- Try the template: Use the dune-dbt-template to create your new repository
- Build your first model: Use the examples as a starting point
- Set up CI/CD: Configure GitHub Actions for automated testing
- Deploy to production: Schedule your dbt runs and start building