Data Transformations is an Enterprise-only feature with usage-based credit consumption.
CI/CD and Credit Usage

Each dbt execution consumes credits, whether triggered locally, through CI/CD, or on a schedule. The template repository ships with automated CI/CD triggers disabled by default so you can enable them when you're ready. See CI/CD & Workflows for tips on managing costs in automated pipelines.

Pricing & Credits

Credit Consumption

Your credit usage breaks down into three components (compute, write operations, and storage), plus maintenance operations that consume the same kinds of credits:

Compute Credits

  • Same as Fluid Engine credits for query execution
  • Charged based on actual compute resources used
  • Depends on query complexity and execution time

Write Operations

  • Minimum 3 credits per write operation
  • Scales with data volume (GB written)
  • Applied to INSERT, MERGE, CREATE TABLE AS SELECT, etc.

Storage Credits

  • 4 credits per GB per month
  • Calculated based on end-of-day storage usage
  • Encourages efficient data management and cleanup
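To make the numbers concrete (the workload figures here are hypothetical; see Billing for actual rates): a daily incremental run that writes 0.5 GB performs about 30 write operations per month, costing at least 3 × 30 = 90 write credits. If the resulting table holds 10 GB at the end of each day, storage adds roughly 10 × 4 = 40 credits per month, on top of the compute credits for the queries themselves.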

Maintenance Operations

  • OPTIMIZE, VACUUM, and ANALYZE operations consume credits
  • Based on compute resources and data written during maintenance
  • Can be automated with dbt post-hooks (see the sketch below)
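A minimal sketch of automating maintenance with dbt post-hooks, assuming Dune accepts plain OPTIMIZE and VACUUM statements on dbt-managed tables; check the maintenance docs for the exact syntax:

-- models/marts/user_stats.sql (sketch; maintenance statement syntax may differ)
{{ config(
    materialized='table',
    post_hook=[
        "OPTIMIZE {{ this }}",
        "VACUUM {{ this }}"
    ]
) }}

Since these statements consume credits themselves, reserve them for large, frequently queried tables rather than attaching them to every model.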
There is no separate platform fee. You only pay for what you use through credits. See Billing for more details on credit pricing.

Best Practices

Model Organization

models/
├── staging/        # Clean and standardize raw data
├── intermediate/   # Business logic transformations
├── marts/          # Final datasets for analytics
└── utils/          # Reusable utility models

Schema Organization

  • Use dev target with personal suffixes during development (see the profiles.yml sketch below)
  • Keep prod target for production deployments only
  • Consider separate schemas for different projects or domains
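A sketch of how dev and prod targets might be laid out in profiles.yml; the profile name and schema names are hypothetical, and the connection settings depend on your adapter:

# profiles.yml (sketch: connection fields are adapter-specific and omitted)
my_dune_project:
  target: dev                        # default to the personal dev target
  outputs:
    dev:
      schema: analytics_dev_alice    # personal suffix per developer
      # ...connection settings...
    prod:
      schema: analytics              # production deployments only
      # ...connection settings...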

Performance Optimization

  • Use incremental models for large datasets
  • Partition by date fields when possible, adding the partitions via dbt configurations (both shown in the sketch below)
  • Run OPTIMIZE and VACUUM on large tables
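A sketch combining incremental materialization with a date partition; the partition_by config key is adapter-dependent, and the column and model names (block_date, stg_trades) are hypothetical:

-- models/marts/user_stats.sql (sketch; partition_by key is adapter-dependent)
{{ config(
    materialized='incremental',
    partition_by='block_date'
) }}

select
    block_date,
    user_address,
    count(*) as trade_count
from {{ ref('stg_trades') }}
{% if is_incremental() %}
-- only process days not already in the target table
where block_date > (select max(block_date) from {{ this }})
{% endif %}
group by block_date, user_address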

Credit Management

  • Monitor credit usage in your Dune dashboard
  • Use incremental models to reduce compute and write costs
  • Drop unused development tables regularly
  • Implement table lifecycle policies

Data Management

  • Clean up temporary/test data in __tmp_ schemas (example below)
  • Document table retention requirements
  • Regularly review and optimize storage usage
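A one-off cleanup might look like the following; the schema and table names are hypothetical:

-- hypothetical schema/table names; adjust to your namespace
DROP TABLE IF EXISTS __tmp_alice.user_stats_scratch;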

Version Control

  • Store all transformation logic in Git
  • Use meaningful commit messages
  • Tag production releases, as shown below
  • Review PRs before merging to main
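A typical release-tagging flow (the version string is illustrative):

git tag -a v1.0.0 -m "Production release"   # version string is illustrative
git push origin v1.0.0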

Documentation

# schema.yml
version: 2

models:
  - name: user_stats
    description: "Daily user trading statistics"
    columns:
      - name: user_address
        description: "Ethereum address of the user"
        tests:
          - not_null
          - unique
      - name: trade_count
        description: "Number of trades in the period"

Resources

Documentation

Support

Getting Started

Ready to get started? Clone the template repository and have your first dbt model running on Dune in minutes!

  • dbt Template Repository: Clone our official template to get started
  • Getting Started Guide: Step-by-step setup instructions