# IC Skills — Full Reference All IC Skills in a single file for direct context injection. Source: https://github.com/dfinity/icskills Skills: 13 --- --- name: asset-canister title: "Asset Canister & Frontend" category: Frontend description: "Deploy frontend assets to the IC. Certified assets, custom domains, SPA routing, and content encoding." endpoints: 5 version: 3.3.1 status: stable dependencies: [] requires: [icp-cli >= 0.1.0] tags: [frontend, assets, hosting, spa, certified, domain, upload, static] --- # Asset Canister & Frontend Hosting ## What This Is The asset canister hosts static files (HTML, CSS, JS, images) directly on the Internet Computer. This is how web frontends are deployed on-chain. Responses are certified by the subnet, and HTTP gateways automatically verify integrity, i.e. that the content really was served by the blockchain rather than a centralized server; this verification can also be performed directly in the browser. ## Prerequisites - icp-cli >= 0.1.0 (`brew install dfinity/tap/icp-cli`) - Node.js >= 18 (for building frontend assets) - `@icp-sdk/canisters` npm package (for programmatic uploads) ## Canister IDs Asset canisters are created per-project. There is no single global canister ID. After deployment, your canister ID is stored in `canister_ids.json` (local and mainnet). Access patterns: | Environment | URL Pattern | |-------------|-------------| | Local | `http://<canister-id>.localhost:4943` | | Mainnet | `https://<canister-id>.ic0.app` or `https://<canister-id>.icp0.io` | | Custom domain | `https://yourdomain.com` (with DNS configuration) | ## Mistakes That Break Your Build 1. **Wrong `source` path in icp.yaml.** The `source` array must point to the directory containing your build output. If you use Vite, that is `"dist"`. If you use Next.js export, it is `"out"`. If the path does not exist at deploy time, `icp deploy` fails silently or deploys an empty canister. 2. 
**Missing `.ic-assets.json5` for single-page apps.** Without a rewrite rule, refreshing on `/about` returns a 404 because the asset canister looks for a file literally named `/about`. You must configure a fallback to `index.html`. 3. **Forgetting to build before deploy.** `icp deploy` runs the `build` command from icp.yaml, but if it is empty or misconfigured, the `source` directory will be stale or empty. 4. **Not setting content-type headers.** The asset canister infers content types from file extensions. If you upload files programmatically without setting the content type, browsers may not render them correctly. 5. **Deploying to the wrong canister name.** If icp.yaml has `"frontend"` but you run `icp deploy assets`, it creates a new canister instead of updating the existing one. 6. **Exceeding canister storage limits.** The asset canister uses stable memory, which can hold well over 4GB. However, individual assets are limited by the 2MB ingress message size (the asset manager in `@icp-sdk/canisters` handles chunking automatically for uploads >1.9MB). The practical concern is total cycle cost for storage -- large media files (videos, datasets) become expensive. Use a dedicated storage solution for large files. 7. **Not configuring `allow_raw_access` correctly.** The asset canister has two serving modes: certified (via `ic0.app` / `icp0.io`, where HTTP gateways verify response integrity) and raw (via `raw.ic0.app` / `raw.icp0.io`, where no verification occurs). By default, `allow_raw_access` is `true`, meaning assets are also available on the raw domain. On the raw domain, boundary nodes or a network-level attacker can tamper with response content undetected. Set `"allow_raw_access": false` in `.ic-assets.json5` for any sensitive assets. Only enable raw access when strictly needed. 
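Mistakes #1 and #3 above can be caught before deploy with a small pre-flight check. A minimal sketch, not part of any SDK -- the `"dist"` path is an assumption (the Vite default); adjust it to match the `source` entry in your icp.yaml:

```typescript
// Pre-deploy sanity check: verify the build output directory exists and is
// non-empty before running `icp deploy`, so an empty canister never ships.
import { existsSync, readdirSync } from "fs";

function buildOutputReady(dir: string): boolean {
  if (!existsSync(dir)) return false;     // mistake #1: source path does not exist
  return readdirSync(dir).length > 0;     // mistake #3: stale or empty build output
}

if (!buildOutputReady("dist")) {
  console.error("Build output missing or empty -- run `npm run build` first.");
}
```

Wire a check like this into your deploy script, ahead of the `icp deploy` invocation.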
## Implementation ### icp.yaml Configuration ```yaml canisters: frontend: type: assets source: - dist build: - npm run build dependencies: - backend backend: type: motoko main: src/backend/main.mo ``` Key fields: - `type: assets` -- tells `icp` this is an asset canister - `source` -- array of directories to upload (contents, not the directory itself) - `build` -- commands `icp deploy` runs before uploading (your frontend build step) - `dependencies` -- ensures backend is deployed first (so canister IDs are available) ### SPA Routing and Default Headers: `.ic-assets.json5` Create this file in your `source` directory (e.g., `dist/.ic-assets.json5`) or project root. For it to be included in the asset canister, it must end up in the `source` directory at deploy time. Recommended approach: place the file in your `public/` or `static/` folder so your build tool copies it into `dist/` automatically. ```json5 [ { // Default headers for all paths: caching, security, and raw access policy "match": "**/*", "security_policy": "standard", "headers": { "Cache-Control": "public, max-age=0, must-revalidate" }, // Disable raw (uncertified) access by default -- see mistake #7 above "allow_raw_access": false }, { // Cache static assets aggressively (they have content hashes in filenames) "match": "assets/**/*", "headers": { "Cache-Control": "public, max-age=31536000, immutable" } }, { // SPA fallback: serve index.html for any unmatched route "match": "**/*", "enable_aliasing": true } ] ``` For the SPA fallback to work, the critical setting is `"enable_aliasing": true` -- this tells the asset canister to serve `index.html` when a requested path has no matching file. If the standard security policy above blocks the app from working, overwrite the default security headers with custom values, adding them after `Cache-Control` above. Act like a senior security engineer, making these headers as secure as possible. 
The standard policy headers can be found here: https://github.com/dfinity/sdk/blob/master/src/canisters/frontend/ic-asset/src/security_policy.rs ### Content Encoding The asset canister automatically compresses assets with gzip and brotli. No configuration needed. When a browser sends `Accept-Encoding: gzip, br`, the canister serves the compressed version. To verify compression is working: ```bash icp canister call frontend http_request '(record { url = "/"; method = "GET"; body = vec {}; headers = vec { record { "Accept-Encoding"; "gzip" } }; certificate_version = opt 2; })' ``` ### Custom Domain Setup To serve your asset canister from a custom domain: 1. Create a file `.well-known/ic-domains` in your `source` directory containing your domain: ```text yourdomain.com www.yourdomain.com ``` 2. Add DNS records: ```text # CNAME record pointing to boundary nodes yourdomain.com. CNAME icp1.io. # ACME challenge record for TLS certificate provisioning _acme-challenge.yourdomain.com. CNAME _acme-challenge.yourdomain.com.icp2.io. # Canister ID TXT record for verification _canister-id.yourdomain.com. TXT "<canister-id>" ``` 3. Deploy your canister so the `.well-known/ic-domains` file is available, then register the custom domain with the boundary nodes by submitting a one-time registration request (a POST with `{"name": "yourdomain.com"}` to the boundary-node registration endpoint, `https://icp0.io/registrations`). The boundary nodes verify the DNS records and the `.well-known/ic-domains` file. No NNS proposal is needed. 4. Wait for the boundary nodes to process the registration and provision the TLS certificate. This typically takes a few minutes. You can verify by visiting `https://yourdomain.com` once DNS has propagated. ### Programmatic Uploads with @icp-sdk/canisters For uploading files from code (not just via `icp deploy`): ```javascript import { AssetManager } from "@icp-sdk/canisters/assets"; // Asset management utility import { HttpAgent } from "@icp-sdk/core/agent"; // SECURITY: shouldFetchRootKey fetches the root public key from the replica at // runtime. In production the root key is hardcoded and trusted. 
Fetching it at // runtime lets a man-in-the-middle supply a fake key and forge certified responses. // NEVER set shouldFetchRootKey to true when host points to mainnet. const LOCAL_REPLICA = "http://localhost:4943"; const MAINNET = "https://ic0.app"; const host = LOCAL_REPLICA; // Change to MAINNET for production const agent = await HttpAgent.create({ host, // Only fetch the root key when talking to a local replica. // Setting this to true against mainnet is a security vulnerability. shouldFetchRootKey: host === LOCAL_REPLICA, }); const assetManager = new AssetManager({ canisterId: "your-asset-canister-id", agent, }); // Upload a single file // Files >1.9MB are automatically chunked (16 parallel chunks) const key = await assetManager.store(fileBuffer, { fileName: "photo.jpg", contentType: "image/jpeg", path: "/uploads", }); console.log("Uploaded to:", key); // "/uploads/photo.jpg" // List all assets const assets = await assetManager.list(); console.log(assets); // [{ key: "/index.html", content_type: "text/html", ... }, ...] // Delete an asset await assetManager.delete("/uploads/old-photo.jpg"); // Batch upload a directory import { readFileSync, readdirSync } from "fs"; const files = readdirSync("./dist"); for (const file of files) { const content = readFileSync(`./dist/${file}`); await assetManager.store(content, { fileName: file, path: "/" }); } ``` ### Authorization for Uploads The asset canister has a built-in permission system with three roles (from least to most privileged): - **Prepare** -- can upload chunks and propose batches, but cannot commit them live. - **Commit** -- can upload and commit assets (make them live). This is the standard role for deploy pipelines. - **ManagePermissions** -- can grant and revoke permissions to other principals. Use `grant_permission` to give principals only the access they need. 
Do **not** use `--add-controller` for upload access -- controllers have full canister control (upgrade code, change settings, delete the canister, drain cycles). ```bash # Grant "prepare" permission (can upload but not commit) -- use for preview/staging workflows icp canister call frontend grant_permission '(record { to_principal = principal "<principal>"; permission = variant { Prepare } })' # Grant commit permission -- use for deploy pipelines that need to publish assets icp canister call frontend grant_permission '(record { to_principal = principal "<principal>"; permission = variant { Commit } })' # Grant permission management -- use for principals that need to onboard/offboard other uploaders icp canister call frontend grant_permission '(record { to_principal = principal "<principal>"; permission = variant { ManagePermissions } })' # List current permissions icp canister call frontend list_permitted '(record { permission = variant { Commit } })' # Revoke a permission icp canister call frontend revoke_permission '(record { of_principal = principal "<principal>"; permission = variant { Commit } })' ``` > **Security Warning:** `icp canister update-settings frontend --add-controller <principal>` grants full canister control -- not just upload permission. A controller can upgrade the canister WASM, change all settings, or delete the canister entirely. Only add controllers when you genuinely need full administrative access. ## Deploy & Test ### Local Deployment ```bash # Start the local replica icp network start -d # Build and deploy frontend + backend icp deploy # Or deploy only the frontend icp deploy frontend ``` ### Mainnet Deployment ```bash # Ensure you have cycles in your wallet icp deploy -e ic frontend ``` ### Updating Frontend Only When you only changed frontend code: ```bash # Rebuild and redeploy just the frontend canister npm run build icp deploy frontend ``` ## Verify It Works ```bash # 1. Check the canister is running icp canister status frontend # Expected: Status: Running, Memory Size: <bytes> # 2. 
List uploaded assets icp canister call frontend list '(record {})' # Expected: A list of asset keys like "/index.html", "/assets/index-abc123.js", etc. # 3. Fetch the index page via http_request icp canister call frontend http_request '(record { url = "/"; method = "GET"; body = vec {}; headers = vec {}; certificate_version = opt 2; })' # Expected: record { status_code = 200; body = blob "..."; ... } # 4. Test SPA fallback (should return index.html, not 404) icp canister call frontend http_request '(record { url = "/about"; method = "GET"; body = vec {}; headers = vec {}; certificate_version = opt 2; })' # Expected: status_code = 200 (same content as "/"), NOT 404 # 5. Open in browser # Local: http://<canister-id>.localhost:4943 # Mainnet: https://<canister-id>.ic0.app # 6. Get canister ID icp canister id frontend # Expected: prints the canister ID (e.g., "bkyz2-fmaaa-aaaaa-qaaaq-cai") # 7. Check storage usage icp canister info frontend # Shows memory usage, module hash, controllers ``` --- --- name: certified-variables title: Certified Variables category: Security description: "Serve verified responses from query calls. Merkle tree construction, certificate validation, and certified asset patterns." endpoints: 4 version: 1.3.1 status: stable dependencies: [] requires: [icp-cli >= 0.1.0, ic-certified-map (Rust), ic-certification (Motoko)] tags: [certification, query, merkle, verified, response, trust, proof] --- # Certified Variables & Certified Assets ## What This Is Query responses on the Internet Computer come from a single replica and are NOT verified by consensus. A malicious or faulty replica could return fabricated data. Certification solves this: the canister stores a hash in the subnet's certified state tree during update calls, and then query responses include a certificate signed by the subnet's threshold BLS key proving the data is authentic. The result is responses that are both fast (no consensus delay) AND cryptographically verified. 
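The certified hash is always exactly 32 bytes, which is why arbitrary data must be reduced to a digest (or rolled into a Merkle tree whose 32-byte root is certified) before calling `certified_data_set`. A minimal sketch of that constraint using SHA-256; the helper name is illustrative, not from any SDK:

```typescript
// The subnet certifies at most 32 bytes per canister. Arbitrary data is
// therefore hashed first -- SHA-256 output is exactly 32 bytes.
import { createHash } from "crypto";

function certifiedDataFor(value: string): Buffer {
  return createHash("sha256").update(value, "utf8").digest(); // always 32 bytes
}

console.log(certifiedDataFor("hello world").length); // 32
```

The same digest would be recomputed client-side and compared against the hash proven by the certificate.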
## Prerequisites - `icp-cli` >= 0.1.0 (install: `brew install dfinity/tap/icp-cli`) - Rust: `ic-certified-map` crate (for Merkle tree), `ic-cdk` (for `certified_data_set` / `data_certificate`) - Motoko: `CertifiedData` module (included in mo:core/mo:base), `ic-certification` package (`mops add ic-certification`) for Merkle tree with witness support - Frontend: `@icp-sdk/core` (agent, principal), `@dfinity/certificate-verification` ## Canister IDs No external canister IDs required. Certification uses the IC system API exposed through CDK wrappers: - `ic_cdk::api::certified_data_set` (Rust) / `CertifiedData.set` (Motoko) -- called during update calls to set the certified hash (max 32 bytes) - `ic_cdk::api::data_certificate` (Rust) / `CertifiedData.getCertificate` (Motoko) -- called during query calls to retrieve the subnet certificate The IC root public key (needed for client-side verification): - Mainnet: `308182301d060d2b0601040182dc7c0503010201060c2b0601040182dc7c05030201036100814c0e6ec71fab583b08bd81373c255c3c371b2e84863c98a4f1e08b74235d14fb5d9c0cd546d9685f913a0c0b2cc5341583bf4b4392e467db96d65b9bb4cb717112f8472e0d5a4d14505ffd7484b01291091c5f87b98883463f98091a0baaae` - Local: available from `icp` (agent handles this automatically) ## Mistakes That Break Your Build 1. **Trying to store more than 32 bytes of certified data.** The `certified_data_set` API accepts exactly one blob of at most 32 bytes. You cannot certify arbitrary data directly. Instead, build a Merkle tree over your data and certify only the root hash (32 bytes). The tree structure provides proofs for individual values. 2. **Calling `certified_data_set` in a query call.** Certification can ONLY be set during update calls (which go through consensus). Calling it in a query traps. Pattern: set the hash during writes, read the certificate during queries. 3. **Forgetting to include the certificate in query responses.** The certificate is obtained via `data_certificate()` during query calls. 
If you return data without the certificate, clients cannot verify anything. Always return a tuple of (data, certificate, witness). 4. **Not updating the certified hash after data changes.** If you modify the data but forget to call `certified_data_set` with the new root hash, query responses will fail verification because the certificate proves a stale hash. 5. **Building the witness for the wrong key.** The witness (Merkle proof) must correspond to the exact key being queried. A witness for key "users/alice" will not verify key "users/bob". 6. **Assuming `data_certificate()` returns a value in update calls.** It returns `null`/`None` during update calls. Certificates are only available during query calls. 7. **Certifying data at canister init but not on upgrades.** After a canister upgrade, the certified data is cleared. You must call `certified_data_set` in both `#[init]` and `#[post_upgrade]` (Rust) or `system func postupgrade` (Motoko) to re-establish certification. 8. **Not validating certificate freshness on the client.** The certificate's state tree contains a `/time` field with the timestamp when the subnet produced it. Clients MUST check that this timestamp is recent (recommended: within 5 minutes of current time). Without this check, an attacker could replay a stale certificate with outdated data. Always verify `certificate_time` is within an acceptable delta before trusting the response. ## How Certification Works ``` UPDATE CALL (goes through consensus): 1. Canister modifies data 2. Canister builds/updates Merkle tree 3. Canister calls certified_data_set(root_hash) -- 32 bytes 4. Subnet includes root_hash in its certified state tree QUERY CALL (single replica, no consensus): 1. Client sends query 2. Canister calls data_certificate() -- gets subnet BLS signature 3. Canister builds witness (Merkle proof) for the requested key 4. Canister returns: { data, certificate, witness } CLIENT VERIFICATION: 1. 
Verify certificate signature against IC root public key 2. Extract root_hash from certificate's state tree 3. Verify witness: root_hash + witness proves data is in the tree 4. Trust the data ``` ## Implementation ### Rust **Cargo.toml:** ```toml [package] name = "certified_vars_backend" version = "0.1.0" edition = "2021" [lib] crate-type = ["cdylib"] [dependencies] candid = "0.10" ic-cdk = "0.19" ic-certified-map = "0.4" serde = { version = "1", features = ["derive"] } serde_bytes = "0.11" ciborium = "0.2" ``` **Complete certified key-value store:** ```rust use candid::{CandidType, Deserialize}; use ic_cdk::{init, post_upgrade, query, update}; use ic_certified_map::{AsHashTree, RbTree}; use serde_bytes::ByteBuf; use std::cell::RefCell; thread_local! { // RbTree is a Merkle-tree-backed map: keys and values are byte slices static TREE: RefCell<RbTree<Vec<u8>, Vec<u8>>> = RefCell::new(RbTree::new()); } // Update the certified data hash after any modification fn update_certified_data() { TREE.with(|tree| { let tree = tree.borrow(); // root_hash() returns a 32-byte SHA-256 hash of the entire tree ic_cdk::api::certified_data_set(&tree.root_hash()); }); } #[init] fn init() { update_certified_data(); } #[post_upgrade] fn post_upgrade() { // Assumes data has already been deserialized from stable memory into the TREE. // CRITICAL: re-establish certification after upgrade — certified_data is cleared on upgrade. 
update_certified_data(); } #[update] fn set(key: String, value: String) { TREE.with(|tree| { let mut tree = tree.borrow_mut(); tree.insert(key.as_bytes().to_vec(), value.as_bytes().to_vec()); }); // Must update certified hash after every data change update_certified_data(); } #[update] fn delete(key: String) { TREE.with(|tree| { let mut tree = tree.borrow_mut(); tree.delete(key.as_bytes()); }); update_certified_data(); } #[derive(CandidType, Deserialize)] struct CertifiedResponse { value: Option<String>, certificate: ByteBuf, // subnet BLS signature witness: ByteBuf, // Merkle proof for this key } #[query] fn get(key: String) -> CertifiedResponse { // data_certificate() is only available in query calls let certificate = ic_cdk::api::data_certificate() .expect("data_certificate only available in query calls"); TREE.with(|tree| { let tree = tree.borrow(); // Look up the value let value = tree.get(key.as_bytes()) .map(|v| String::from_utf8(v.clone()).unwrap()); // Build a witness (Merkle proof) for this specific key let witness = tree.witness(key.as_bytes()); // Serialize the witness as CBOR let mut witness_buf = vec![]; ciborium::into_writer(&witness, &mut witness_buf) .expect("Failed to serialize witness as CBOR"); CertifiedResponse { value, certificate: ByteBuf::from(certificate), witness: ByteBuf::from(witness_buf), } }) } // Batch set multiple values in one update call (more efficient) #[update] fn set_many(entries: Vec<(String, String)>) { TREE.with(|tree| { let mut tree = tree.borrow_mut(); for (key, value) in entries { tree.insert(key.as_bytes().to_vec(), value.as_bytes().to_vec()); } }); // Single certification update for all changes update_certified_data(); } ``` ### HTTP Certification (v2) for Custom HTTP Canisters For canisters serving HTTP responses directly (not through the asset canister), responses must be certified so the HTTP gateway can verify them. 
**Additional Cargo.toml dependency:** ```toml [package] name = "http_certified_backend" version = "0.1.0" edition = "2021" [lib] crate-type = ["cdylib"] [dependencies] ic-http-certification = "3.1" ``` **Certifying HTTP responses:** > **Note:** The HTTP certification API is evolving rapidly. Verify these examples against the latest [ic-http-certification docs](https://docs.rs/ic-http-certification) before use. ```rust use ic_http_certification::{ HttpCertification, HttpCertificationPath, HttpCertificationTree, HttpCertificationTreeEntry, HttpRequest, HttpResponse, DefaultCelBuilder, DefaultResponseCertification, }; use std::cell::RefCell; thread_local! { static HTTP_TREE: RefCell<HttpCertificationTree> = RefCell::new( HttpCertificationTree::default() ); } // Define what gets certified using CEL (Common Expression Language) fn certify_response(path: &str, request: &HttpRequest, response: &HttpResponse) { // Full certification: certify both request path and response body let cel = DefaultCelBuilder::full_certification() .with_response_certification(DefaultResponseCertification::certified_response_headers( vec!["Content-Type", "Content-Length"], )) .build(); // Create the certification from the CEL expression, request, and response let certification = HttpCertification::full(&cel, request, response, None) .expect("Failed to create HTTP certification"); let http_path = HttpCertificationPath::exact(path); HTTP_TREE.with(|tree| { let mut tree = tree.borrow_mut(); let entry = HttpCertificationTreeEntry::new(http_path, certification); tree.insert(&entry); // Update canister certified data with tree root hash ic_cdk::api::certified_data_set(&tree.root_hash()); }); } ``` ### Motoko **Using CertifiedData module:** ```motoko import CertifiedData "mo:core/CertifiedData"; import Blob "mo:core/Blob"; import Nat8 "mo:core/Nat8"; import Text "mo:core/Text"; import Map "mo:core/Map"; import Array "mo:core/Array"; import Iter "mo:core/Iter"; // Requires: mops add sha2 import Sha256 "mo:sha2/Sha256"; 
persistent actor { // Simple certified single-value example: var certifiedValue : Text = ""; // Set a certified value (update call only) public func setCertifiedValue(value : Text) : async () { certifiedValue := value; // Hash the value and set as certified data (max 32 bytes) let hash = Sha256.fromBlob(#sha256, Text.encodeUtf8(value)); CertifiedData.set(hash); }; // Get the certified value with its certificate (query call) public query func getCertifiedValue() : async { value : Text; certificate : ?Blob; } { { value = certifiedValue; certificate = CertifiedData.getCertificate(); } }; }; ``` **Certified key-value store with Merkle tree (advanced):** For certifying multiple values with per-key witnesses, use the `ic-certification` mops package (`mops add ic-certification`). It provides a real Merkle tree (`CertTree`) that can generate proofs for individual keys: ```motoko import CertifiedData "mo:core/CertifiedData"; import Blob "mo:core/Blob"; import Text "mo:core/Text"; // Requires: mops add ic-certification import CertTree "mo:ic-certification/CertTree"; persistent actor { // CertTree.Store is stable -- persists across upgrades let certStore : CertTree.Store = CertTree.newStore(); let ct = CertTree.Ops(certStore); // Set certified data on init ct.setCertifiedData(); // Set a key-value pair and update certification public func set(key : Text, value : Text) : async () { ct.put([Text.encodeUtf8(key)], Text.encodeUtf8(value)); // CRITICAL: call after every mutation to update the subnet-certified root hash ct.setCertifiedData(); }; // Delete a key and update certification public func remove(key : Text) : async () { ct.delete([Text.encodeUtf8(key)]); ct.setCertifiedData(); }; // Query with certificate and Merkle witness for the requested key public query func get(key : Text) : async { value : ?Blob; certificate : ?Blob; witness : Blob; } { let path = [Text.encodeUtf8(key)]; // reveal() generates a Merkle proof for this specific path let witness = ct.reveal(path); { 
value = ct.lookup(path); certificate = CertifiedData.getCertificate(); witness = ct.encodeWitness(witness); } }; // Re-establish certification after upgrade // (CertTree.Store is stable, so the tree data survives, but certified_data is cleared) system func postupgrade() { ct.setCertifiedData(); }; }; ``` ### Frontend Verification (TypeScript) Uses `@dfinity/certificate-verification` which handles the full 6-step verification: 1. Verify certificate BLS signature against IC root key 2. Validate certificate freshness (`/time` within `maxCertificateTimeOffsetMs`) 3. CBOR-decode the witness into a HashTree 4. Reconstruct the witness root hash 5. Compare reconstructed root hash with `certified_data` from the certificate 6. Return the verified HashTree for value lookup ```typescript import { verifyCertification } from "@dfinity/certificate-verification"; import { lookup_path, HashTree } from "@icp-sdk/core/agent"; import { Principal } from "@icp-sdk/core/principal"; const MAX_CERT_TIME_OFFSET_MS = 5 * 60 * 1000; // 5 minutes async function getVerifiedValue( rootKey: ArrayBuffer, canisterId: string, key: string, response: { value: string | null; certificate: ArrayBuffer; witness: ArrayBuffer } ): Promise<string | null> { // verifyCertification performs steps 1-5: // - verifies BLS signature on the certificate // - checks certificate /time is within maxCertificateTimeOffsetMs // - CBOR-decodes the witness into a HashTree // - reconstructs root hash from the witness tree // - compares it against certified_data in the certificate // Throws CertificateTimeError or CertificateVerificationError on failure. const tree: HashTree = await verifyCertification({ canisterId: Principal.fromText(canisterId), encodedCertificate: response.certificate, encodedTree: response.witness, rootKey, maxCertificateTimeOffsetMs: MAX_CERT_TIME_OFFSET_MS, }); // Step 6: Look up the specific key in the verified witness tree. // The path must match how the canister inserted the key (e.g., key as UTF-8 bytes). 
const leafData = lookup_path([new TextEncoder().encode(key)], tree); if (!leafData) { // Key is provably absent from the certified tree return null; } const verifiedValue = new TextDecoder().decode(leafData); // Confirm the canister-returned value matches the witness-proven value if (response.value !== null && response.value !== verifiedValue) { throw new Error( "Response value does not match witness — canister returned tampered data" ); } return verifiedValue; } ``` For asset canisters, the HTTP gateway (boundary node) verifies certification transparently using the [HTTP Gateway Protocol](https://docs.internetcomputer.org/references/http-gateway-protocol-spec) -- no client-side code needed. ## Deploy & Test ```bash # Deploy the canister icp deploy backend # Set a certified value (update call -- goes through consensus) icp canister call backend set '("greeting", "hello world")' # Query the certified value icp canister call backend get '("greeting")' # Returns: record { value = opt "hello world"; certificate = blob "..."; witness = blob "..." } # Set multiple values icp canister call backend set '("name", "Alice")' icp canister call backend set '("age", "30")' # Delete a value icp canister call backend delete '("age")' # Verify the root hash is being set # (No direct command -- verified by the presence of a non-null certificate in query response) ``` ## Verify It Works ```bash # 1. Verify certificate is present in query response icp canister call backend get '("greeting")' # Expected: certificate field is a non-empty blob (NOT null) # If certificate is null, you are calling from an update context (wrong) # 2. Verify data integrity after update icp canister call backend set '("key1", "value1")' icp canister call backend get '("key1")' # Expected: value = opt "value1" with valid certificate # 3. 
Verify certification survives canister upgrade icp canister call backend set '("persistent", "data")' icp deploy backend # triggers upgrade icp canister call backend get '("persistent")' # Expected: certificate is still non-null (postupgrade re-established certification) # Note: data persistence depends on stable storage implementation # 4. Verify non-existent key returns null value with valid certificate icp canister call backend get '("nonexistent")' # Expected: value = null, certificate = blob "..." (certificate still valid) # 5. Frontend verification test # Open browser developer tools, check network requests # Query responses should include IC-Certificate header # The service worker (if using asset canister) validates automatically # Console should NOT show "Certificate verification failed" errors # 6. For HTTP certification (custom HTTP canister): curl -v https://CANISTER_ID.ic0.app/path # Expected: Response headers include IC-Certificate # HTTP gateway verifies the certificate before forwarding to client ``` --- --- name: ckbtc title: ckBTC Integration category: DeFi description: "Accept, send, and manage ckBTC in your canister. Covers minting, transfers, balance checks, and UTXO management." endpoints: 14 version: 2.1.2 status: stable dependencies: [icrc-ledger, wallet] requires: [icp-cli >= 0.1.0, mops, ic-cdk >= 0.19] tags: [bitcoin, btc, defi, transfer, deposit, withdrawal, utxo, minter] --- # Chain-Key Bitcoin (ckBTC) Integration ## What This Is ckBTC is a 1:1 BTC-backed token native to the Internet Computer. No bridges, no wrapping, no third-party custodians. The ckBTC minter canister holds real BTC and mints/burns ckBTC tokens. Transfers settle in 1-2 seconds with a 10 satoshi fee (versus minutes and thousands of satoshis on Bitcoin L1). 
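The fixed 10-satoshi ledger fee means the maximum amount a user can send is always `balance - 10`, never the full balance. A minimal sketch of that arithmetic (the constant mirrors the mainnet ledger fee stated above; the helper name is illustrative):

```typescript
// ckBTC's icrc1_transfer deducts a fixed 10-satoshi fee on top of the
// transferred amount, so a "send all" must transfer balance - fee.
const CKBTC_TRANSFER_FEE = 10n; // satoshis

function maxSendable(balance: bigint): bigint {
  // Balances at or below the fee cannot cover any transfer at all.
  return balance > CKBTC_TRANSFER_FEE ? balance - CKBTC_TRANSFER_FEE : 0n;
}

console.log(maxSendable(1000n)); // 990n
```

Transferring `maxSendable(balance)` instead of `balance` avoids the `InsufficientFunds` error described under mistake #2 below.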
## Prerequisites - `icp-cli` >= 0.1.0 (install: `brew install dfinity/tap/icp-cli`) - For Motoko: `mops` package manager, `core = "2.0.0"` in mops.toml - For Rust: `ic-cdk`, `icrc-ledger-types`, `candid`, `serde` - A funded ICP identity (for mainnet deployment cycles) ## Canister IDs ### Bitcoin Mainnet | Canister | ID | |---|---| | ckBTC Ledger | `mxzaz-hqaaa-aaaar-qaada-cai` | | ckBTC Minter | `mqygn-kiaaa-aaaar-qaadq-cai` | | ckBTC Index | `n5wcd-faaaa-aaaar-qaaea-cai` | | ckBTC Checker | `oltsj-fqaaa-aaaar-qal5q-cai` | ### Bitcoin Testnet4 | Canister | ID | |---|---| | ckBTC Ledger | `mc6ru-gyaaa-aaaar-qaaaq-cai` | | ckBTC Minter | `ml52i-qqaaa-aaaar-qaaba-cai` | | ckBTC Index | `mm444-5iaaa-aaaar-qaabq-cai` | ## How It Works ### Deposit Flow (BTC -> ckBTC) 1. Call `get_btc_address` on the minter with the user's principal + subaccount. This returns a unique Bitcoin address controlled by the minter. 2. User sends BTC to that address using any Bitcoin wallet. 3. Wait for Bitcoin confirmations (the minter requires confirmations before minting). 4. Call `update_balance` on the minter with the same principal + subaccount. The minter checks for new UTXOs and mints equivalent ckBTC to the user's ICRC-1 account. ### Transfer Flow (ckBTC -> ckBTC) Call `icrc1_transfer` on the ckBTC ledger. Fee is 10 satoshis. Settles in 1-2 seconds. ### Withdrawal Flow (ckBTC -> BTC) 1. Call `icrc2_approve` on the ckBTC ledger to grant the minter canister an allowance to spend from your account. 2. Call `retrieve_btc_with_approval` on the minter with `{ address, amount, from_subaccount: null }`. 3. The minter uses the approval to burn the ckBTC and submits a Bitcoin transaction. 4. The BTC arrives at the destination address after Bitcoin confirmations. ### Subaccount Generation Each user gets a unique deposit address derived from their principal + an optional 32-byte subaccount. 
To give each user a distinct deposit address within your canister, derive subaccounts from a user-specific identifier (their principal or a sequential ID). ## Mistakes That Break Your Build 1. **Using the wrong minter canister ID.** The minter ID is `mqygn-kiaaa-aaaar-qaadq-cai`. Do not confuse it with the ledger (`mxzaz-...`) or index (`n5wcd-...`). 2. **Forgetting the 10 satoshi transfer fee.** Every `icrc1_transfer` deducts 10 satoshis beyond the amount. If the user has exactly 1000 satoshis and you transfer 1000, it fails with `InsufficientFunds`. Transfer `balance - 10` instead. 3. **Not calling `update_balance` after a BTC deposit.** Sending BTC to the deposit address does nothing until you call `update_balance`. The minter does not auto-detect deposits. Your app must call this. 4. **Using Account Identifier instead of ICRC-1 Account.** ckBTC uses the ICRC-1 standard: `{ owner: Principal, subaccount: ?Blob }`. Do NOT use the legacy `AccountIdentifier` (hex string) from the ICP ledger. 5. **Subaccount must be exactly 32 bytes or null.** Passing a subaccount shorter or longer than 32 bytes causes a trap. Pad with leading zeros if deriving from a shorter value. 6. **Calling `retrieve_btc` with amount below the minimum.** The minter has a minimum withdrawal amount (currently 50,000 satoshis / 0.0005 BTC). Below this, you get `AmountTooLow`. 7. **Not checking the `retrieve_btc` response for errors.** The response is a variant: `Ok` contains `{ block_index }`, `Err` contains specific errors like `MalformedAddress`, `InsufficientFunds`, `TemporarilyUnavailable`. Always match both arms. 8. **Forgetting `owner` in `get_btc_address` args.** If you omit `owner`, Candid sub-typing assigns null, and the minter returns the deposit address of the caller (the canister) instead of the user. 
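Mistakes 4 and 5 above come down to one rule: a subaccount is either `null` or exactly 32 bytes. A dependency-free sketch of the length-prefixed padding scheme (the same layout as the `principalToSubaccount` / `principal_to_subaccount` helpers in the implementation below; `to_subaccount` is an illustrative name, not part of any IC SDK):

```rust
// Sketch: pad a variable-length user id (e.g. principal bytes) into a
// fixed 32-byte ICRC-1 subaccount. Layout assumption: byte 0 holds the
// id length, then the id bytes, then zero padding to exactly 32 bytes.
fn to_subaccount(id: &[u8]) -> [u8; 32] {
    assert!(id.len() <= 31, "id must fit after the 1-byte length prefix");
    let mut sub = [0u8; 32];
    sub[0] = id.len() as u8;
    sub[1..1 + id.len()].copy_from_slice(id);
    sub
}

fn main() {
    let sub = to_subaccount(&[0xAB, 0xCD]);
    assert_eq!(sub.len(), 32); // always exactly 32 bytes, never shorter
    assert_eq!(sub[0], 2); // length prefix
    assert_eq!(&sub[1..3], &[0xAB, 0xCD]); // id bytes
    assert!(sub[3..].iter().all(|b| *b == 0)); // zero padding
}
```

Any shorter or longer value passed as a subaccount traps on the ledger, so normalize to this shape before every call.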
## Implementation ### Motoko #### mops.toml ```toml [package] name = "ckbtc-app" version = "0.1.0" [dependencies] core = "2.0.0" icrc2-types = "1.1.0" ``` #### icp.yaml (local development with ckBTC) For local testing, pull the ckBTC canisters: ```yaml defaults: build: packtool: mops sources canisters: backend: type: motoko main: src/backend/main.mo dependencies: [] networks: local: bind: 127.0.0.1:4943 ``` For mainnet, your canister calls the ckBTC ledger and minter directly by principal. #### src/backend/main.mo ```motoko import Principal "mo:core/Principal"; import Blob "mo:core/Blob"; import Nat "mo:core/Nat"; import Nat8 "mo:core/Nat8"; import Nat64 "mo:core/Nat64"; import Array "mo:core/Array"; import Result "mo:core/Result"; import Error "mo:core/Error"; import Runtime "mo:core/Runtime"; persistent actor Self { // -- Types -- type Account = { owner : Principal; subaccount : ?Blob; }; type TransferArgs = { from_subaccount : ?Blob; to : Account; amount : Nat; fee : ?Nat; memo : ?Blob; created_at_time : ?Nat64; }; type TransferResult = { #Ok : Nat; // block index #Err : TransferError; }; type TransferError = { #BadFee : { expected_fee : Nat }; #BadBurn : { min_burn_amount : Nat }; #InsufficientFunds : { balance : Nat }; #TooOld; #CreatedInFuture : { ledger_time : Nat64 }; #Duplicate : { duplicate_of : Nat }; #TemporarilyUnavailable; #GenericError : { error_code : Nat; message : Text }; }; type UpdateBalanceResult = { #Ok : [UtxoStatus]; #Err : UpdateBalanceError; }; type UtxoStatus = { #ValueTooSmall : Utxo; #Tainted : Utxo; #Checked : Utxo; #Minted : { block_index : Nat64; minted_amount : Nat64; utxo : Utxo }; }; type Utxo = { outpoint : { txid : Blob; vout : Nat32 }; value : Nat64; height : Nat32; }; type UpdateBalanceError = { #NoNewUtxos : { required_confirmations : Nat32; pending_utxos : ?[PendingUtxo]; current_confirmations : ?Nat32; }; #AlreadyProcessing; #TemporarilyUnavailable : Text; #GenericError : { error_code : Nat64; error_message : Text }; }; 
type PendingUtxo = { outpoint : { txid : Blob; vout : Nat32 }; value : Nat64; confirmations : Nat32; }; type ApproveArgs = { from_subaccount : ?Blob; spender : Account; amount : Nat; expected_allowance : ?Nat; expires_at : ?Nat64; fee : ?Nat; memo : ?Blob; created_at_time : ?Nat64; }; type ApproveError = { #BadFee : { expected_fee : Nat }; #InsufficientFunds : { balance : Nat }; #AllowanceChanged : { current_allowance : Nat }; #Expired : { ledger_time : Nat64 }; #TooOld; #CreatedInFuture : { ledger_time : Nat64 }; #Duplicate : { duplicate_of : Nat }; #TemporarilyUnavailable; #GenericError : { error_code : Nat; message : Text }; }; type RetrieveBtcWithApprovalArgs = { address : Text; amount : Nat64; from_subaccount : ?Blob; }; type RetrieveBtcResult = { #Ok : { block_index : Nat64 }; #Err : RetrieveBtcError; }; type RetrieveBtcError = { #MalformedAddress : Text; #AlreadyProcessing; #AmountTooLow : Nat64; #InsufficientFunds : { balance : Nat64 }; #InsufficientAllowance : { allowance : Nat64 }; #TemporarilyUnavailable : Text; #GenericError : { error_code : Nat64; error_message : Text }; }; // -- Remote canister references (mainnet) -- transient let ckbtcLedger : actor { icrc1_transfer : shared (TransferArgs) -> async TransferResult; icrc1_balance_of : shared query (Account) -> async Nat; icrc1_fee : shared query () -> async Nat; icrc2_approve : shared (ApproveArgs) -> async { #Ok : Nat; #Err : ApproveError }; } = actor "mxzaz-hqaaa-aaaar-qaada-cai"; transient let ckbtcMinter : actor { get_btc_address : shared ({ owner : ?Principal; subaccount : ?Blob; }) -> async Text; update_balance : shared ({ owner : ?Principal; subaccount : ?Blob; }) -> async UpdateBalanceResult; retrieve_btc_with_approval : shared (RetrieveBtcWithApprovalArgs) -> async RetrieveBtcResult; } = actor "mqygn-kiaaa-aaaar-qaadq-cai"; // -- Subaccount derivation -- // Derive a 32-byte subaccount from a principal for per-user deposit addresses. 
func principalToSubaccount(p : Principal) : Blob { let bytes = Blob.toArray(Principal.toBlob(p)); let size = bytes.size(); // First byte is length, remaining padded to 32 bytes let sub = Array.tabulate(32, func(i : Nat) : Nat8 { if (i == 0) { Nat8.fromNat(size) } else if (i <= size) { bytes[i - 1] } else { 0 } }); Blob.fromArray(sub) }; // -- Deposit: Get user's BTC deposit address -- public shared ({ caller }) func getDepositAddress() : async Text { if (Principal.isAnonymous(caller)) { Runtime.trap("Authentication required") }; let subaccount = principalToSubaccount(caller); await ckbtcMinter.get_btc_address({ owner = ?Principal.fromActor(Self); subaccount = ?subaccount; }) }; // -- Deposit: Check for new BTC and mint ckBTC -- public shared ({ caller }) func updateBalance() : async UpdateBalanceResult { if (Principal.isAnonymous(caller)) { Runtime.trap("Authentication required") }; let subaccount = principalToSubaccount(caller); await ckbtcMinter.update_balance({ owner = ?Principal.fromActor(Self); subaccount = ?subaccount; }) }; // -- Check user's ckBTC balance -- public shared ({ caller }) func getBalance() : async Nat { if (Principal.isAnonymous(caller)) { Runtime.trap("Authentication required") }; let subaccount = principalToSubaccount(caller); await ckbtcLedger.icrc1_balance_of({ owner = Principal.fromActor(Self); subaccount = ?subaccount; }) }; // -- Transfer ckBTC to another user -- public shared ({ caller }) func transfer(to : Principal, amount : Nat) : async TransferResult { if (Principal.isAnonymous(caller)) { Runtime.trap("Authentication required") }; let fromSubaccount = principalToSubaccount(caller); await ckbtcLedger.icrc1_transfer({ from_subaccount = ?fromSubaccount; to = { owner = to; subaccount = null }; amount = amount; fee = ?10; // 10 satoshis memo = null; created_at_time = null; }) }; // -- Withdraw: Convert ckBTC back to BTC -- public shared ({ caller }) func withdraw(btcAddress : Text, amount : Nat64) : async RetrieveBtcResult { if 
(Principal.isAnonymous(caller)) { Runtime.trap("Authentication required") }; // Step 1: Approve the minter to spend ckBTC from the user's subaccount let fromSubaccount = principalToSubaccount(caller); let approveResult = await ckbtcLedger.icrc2_approve({ from_subaccount = ?fromSubaccount; spender = { owner = Principal.fromText("mqygn-kiaaa-aaaar-qaadq-cai"); subaccount = null; }; amount = Nat64.toNat(amount) + 10; // amount + fee for the minter's burn expected_allowance = null; expires_at = null; fee = ?10; memo = null; created_at_time = null; }); switch (approveResult) { case (#Err(e)) { return #Err(#GenericError({ error_code = 0; error_message = "Approve for minter failed: " # debug_show e })) }; case (#Ok(_)) {}; }; // Step 2: Call retrieve_btc_with_approval on the minter await ckbtcMinter.retrieve_btc_with_approval({ address = btcAddress; amount = amount; from_subaccount = ?fromSubaccount; }) }; }; ``` ### Rust #### Cargo.toml ```toml [package] name = "ckbtc_backend" version = "0.1.0" edition = "2021" [lib] crate-type = ["cdylib"] [dependencies] ic-cdk = "0.19" ic-cdk-timers = "1.0" candid = "0.10" serde = { version = "1", features = ["derive"] } serde_bytes = "0.11" icrc-ledger-types = "0.1" ``` #### src/lib.rs ```rust use candid::{CandidType, Deserialize, Nat, Principal}; use ic_cdk::update; use ic_cdk::call::Call; use icrc_ledger_types::icrc1::account::Account; use icrc_ledger_types::icrc1::transfer::{TransferArg, TransferError}; use icrc_ledger_types::icrc2::approve::{ApproveArgs, ApproveError}; // -- Canister IDs -- const CKBTC_LEDGER: &str = "mxzaz-hqaaa-aaaar-qaada-cai"; const CKBTC_MINTER: &str = "mqygn-kiaaa-aaaar-qaadq-cai"; // -- Minter types -- #[derive(CandidType, Deserialize, Debug)] struct GetBtcAddressArgs { owner: Option<Principal>, subaccount: Option<Vec<u8>>, } #[derive(CandidType, Deserialize, Debug)] struct UpdateBalanceArgs { owner: Option<Principal>, subaccount: Option<Vec<u8>>, } #[derive(CandidType, Deserialize, Debug)] struct RetrieveBtcWithApprovalArgs { address: String, amount: u64,
from_subaccount: Option<Vec<u8>>, } #[derive(CandidType, Deserialize, Debug)] struct RetrieveBtcOk { block_index: u64, } #[derive(CandidType, Deserialize, Debug)] enum RetrieveBtcError { MalformedAddress(String), AlreadyProcessing, AmountTooLow(u64), InsufficientFunds { balance: u64 }, InsufficientAllowance { allowance: u64 }, TemporarilyUnavailable(String), GenericError { error_code: u64, error_message: String }, } #[derive(CandidType, Deserialize, Debug)] struct Utxo { outpoint: OutPoint, value: u64, height: u32, } #[derive(CandidType, Deserialize, Debug)] struct OutPoint { txid: Vec<u8>, vout: u32, } #[derive(CandidType, Deserialize, Debug)] struct PendingUtxo { outpoint: OutPoint, value: u64, confirmations: u32, } #[derive(CandidType, Deserialize, Debug)] enum UtxoStatus { ValueTooSmall(Utxo), Tainted(Utxo), Checked(Utxo), Minted { block_index: u64, minted_amount: u64, utxo: Utxo, }, } #[derive(CandidType, Deserialize, Debug)] enum UpdateBalanceError { NoNewUtxos { required_confirmations: u32, pending_utxos: Option<Vec<PendingUtxo>>, current_confirmations: Option<u32>, }, AlreadyProcessing, TemporarilyUnavailable(String), GenericError { error_code: u64, error_message: String }, } type UpdateBalanceResult = Result<Vec<UtxoStatus>, UpdateBalanceError>; type RetrieveBtcResult = Result<RetrieveBtcOk, RetrieveBtcError>; // -- Subaccount derivation -- // Derive a 32-byte subaccount from a principal for per-user deposit addresses.
fn principal_to_subaccount(principal: &Principal) -> [u8; 32] { let mut subaccount = [0u8; 32]; let principal_bytes = principal.as_slice(); subaccount[0] = principal_bytes.len() as u8; subaccount[1..1 + principal_bytes.len()].copy_from_slice(principal_bytes); subaccount } fn ledger_id() -> Principal { Principal::from_text(CKBTC_LEDGER).unwrap() } fn minter_id() -> Principal { Principal::from_text(CKBTC_MINTER).unwrap() } // -- Deposit: Get user's BTC deposit address -- #[update] async fn get_deposit_address() -> String { let caller = ic_cdk::api::msg_caller(); assert_ne!(caller, Principal::anonymous(), "Authentication required"); let subaccount = principal_to_subaccount(&caller); let args = GetBtcAddressArgs { owner: Some(ic_cdk::api::canister_self()), subaccount: Some(subaccount.to_vec()), }; let (address,): (String,) = Call::unbounded_wait(minter_id(), "get_btc_address") .with_arg(args) .await .expect("Failed to get BTC address") .candid_tuple() .expect("Failed to decode response"); address } // -- Deposit: Check for new BTC and mint ckBTC -- #[update] async fn update_balance() -> UpdateBalanceResult { let caller = ic_cdk::api::msg_caller(); assert_ne!(caller, Principal::anonymous(), "Authentication required"); let subaccount = principal_to_subaccount(&caller); let args = UpdateBalanceArgs { owner: Some(ic_cdk::api::canister_self()), subaccount: Some(subaccount.to_vec()), }; let (result,): (UpdateBalanceResult,) = Call::unbounded_wait(minter_id(), "update_balance") .with_arg(args) .await .expect("Failed to call update_balance") .candid_tuple() .expect("Failed to decode response"); result } // -- Check user's ckBTC balance -- #[update] async fn get_balance() -> Nat { let caller = ic_cdk::api::msg_caller(); assert_ne!(caller, Principal::anonymous(), "Authentication required"); let subaccount = principal_to_subaccount(&caller); let account = Account { owner: ic_cdk::api::canister_self(), subaccount: Some(subaccount), }; let (balance,): (Nat,) = 
Call::unbounded_wait(ledger_id(), "icrc1_balance_of") .with_arg(account) .await .expect("Failed to get balance") .candid_tuple() .expect("Failed to decode response"); balance } // -- Transfer ckBTC to another user -- #[update] async fn transfer(to: Principal, amount: Nat) -> Result<Nat, TransferError> { let caller = ic_cdk::api::msg_caller(); assert_ne!(caller, Principal::anonymous(), "Authentication required"); let from_subaccount = principal_to_subaccount(&caller); let args = TransferArg { from_subaccount: Some(from_subaccount), to: Account { owner: to, subaccount: None, }, amount, fee: Some(Nat::from(10u64)), // 10 satoshis memo: None, created_at_time: None, }; let (result,): (Result<Nat, TransferError>,) = Call::unbounded_wait(ledger_id(), "icrc1_transfer") .with_arg(args) .await .expect("Failed to call icrc1_transfer") .candid_tuple() .expect("Failed to decode response"); result } // -- Withdraw: Convert ckBTC back to BTC -- #[update] async fn withdraw(btc_address: String, amount: u64) -> RetrieveBtcResult { let caller = ic_cdk::api::msg_caller(); assert_ne!(caller, Principal::anonymous(), "Authentication required"); // Step 1: Approve the minter to spend ckBTC from the user's subaccount let from_subaccount = principal_to_subaccount(&caller); let approve_args = ApproveArgs { from_subaccount: Some(from_subaccount), spender: Account { owner: minter_id(), subaccount: None, }, amount: Nat::from(amount) + Nat::from(10u64), // amount + fee for the minter's burn expected_allowance: None, expires_at: None, fee: Some(Nat::from(10u64)), memo: None, created_at_time: None, }; let (approve_result,): (Result<Nat, ApproveError>,) = Call::unbounded_wait(ledger_id(), "icrc2_approve") .with_arg(approve_args) .await .expect("Failed to call icrc2_approve") .candid_tuple() .expect("Failed to decode response"); if let Err(e) = approve_result { return Err(RetrieveBtcError::GenericError { error_code: 0, error_message: format!("Approve for minter failed: {:?}", e), }); } // Step 2: Call retrieve_btc_with_approval on the minter let args =
RetrieveBtcWithApprovalArgs { address: btc_address, amount, from_subaccount: Some(from_subaccount.to_vec()), }; let (result,): (RetrieveBtcResult,) = Call::unbounded_wait(minter_id(), "retrieve_btc_with_approval") .with_arg(args) .await .expect("Failed to call retrieve_btc_with_approval") .candid_tuple() .expect("Failed to decode response"); result } // -- Export Candid interface -- ic_cdk::export_candid!(); ``` ## Deploy & Test ### Local Development There is no local ckBTC minter. For local testing, mock the minter interface or test against mainnet/testnet. ### Deploy to Mainnet ```bash # Deploy your backend canister icp deploy backend -e ic # Your canister calls the mainnet ckBTC canisters directly by principal ``` ### Using icp to Interact with ckBTC Directly ```bash # Check ckBTC balance for an account icp canister call mxzaz-hqaaa-aaaar-qaada-cai icrc1_balance_of \ '(record { owner = principal "YOUR-PRINCIPAL"; subaccount = null })' \ -e ic # Get deposit address icp canister call mqygn-kiaaa-aaaar-qaadq-cai get_btc_address \ '(record { owner = opt principal "YOUR-PRINCIPAL"; subaccount = null })' \ -e ic # Check for new deposits and mint ckBTC icp canister call mqygn-kiaaa-aaaar-qaadq-cai update_balance \ '(record { owner = opt principal "YOUR-PRINCIPAL"; subaccount = null })' \ -e ic # Transfer ckBTC (amount in e8s — 1 ckBTC = 100_000_000) icp canister call mxzaz-hqaaa-aaaar-qaada-cai icrc1_transfer \ '(record { to = record { owner = principal "RECIPIENT-PRINCIPAL"; subaccount = null }; amount = 100_000; fee = opt 10; memo = null; from_subaccount = null; created_at_time = null; })' -e ic # Withdraw ckBTC to a BTC address (amount in satoshis, minimum 50_000) # Note: In production, use icrc2_approve + retrieve_btc_with_approval (see withdraw function above) icp canister call mqygn-kiaaa-aaaar-qaadq-cai retrieve_btc_with_approval \ '(record { address = "bc1q...your-btc-address"; amount = 50_000; from_subaccount = null })' \ -e ic # Check transfer fee icp 
canister call mxzaz-hqaaa-aaaar-qaada-cai icrc1_fee '()' -e ic ``` ## Verify It Works ### Check Balance ```bash icp canister call mxzaz-hqaaa-aaaar-qaada-cai icrc1_balance_of \ '(record { owner = principal "YOUR-PRINCIPAL"; subaccount = null })' \ -e ic # Expected: (AMOUNT : nat) — balance in satoshis (e8s) ``` ### Verify Transfer ```bash # Transfer 1000 satoshis icp canister call mxzaz-hqaaa-aaaar-qaada-cai icrc1_transfer \ '(record { to = record { owner = principal "RECIPIENT"; subaccount = null }; amount = 1_000; fee = opt 10; memo = null; from_subaccount = null; created_at_time = null; })' -e ic # Expected: (variant { Ok = BLOCK_INDEX : nat }) # Verify recipient received it icp canister call mxzaz-hqaaa-aaaar-qaada-cai icrc1_balance_of \ '(record { owner = principal "RECIPIENT"; subaccount = null })' \ -e ic # Expected: balance increased by 1000 ``` ### Verify Deposit Flow ```bash # 1. Get deposit address icp canister call YOUR-CANISTER getDepositAddress -e ic # Expected: "bc1q..." or "3..." — a valid Bitcoin address # 2. Send BTC to that address (external wallet) # 3. Check for new deposits icp canister call YOUR-CANISTER updateBalance -e ic # Expected: (variant { Ok = vec { variant { Minted = record { ... } } } }) # 4. Check ckBTC balance icp canister call YOUR-CANISTER getBalance -e ic # Expected: balance reflects minted ckBTC ``` ### Verify Withdrawal ```bash icp canister call YOUR-CANISTER withdraw '("bc1q...destination", 50_000 : nat64)' -e ic # Expected: (variant { Ok = record { block_index = BLOCK_INDEX : nat64 } }) # The BTC will arrive at the destination address after Bitcoin confirmations ``` --- --- name: evm-rpc title: EVM RPC Integration category: Integration description: "Call Ethereum and EVM chains from IC canisters. JSON-RPC, transaction signing, and cross-chain workflows." 
endpoints: 9 version: 1.1.2 status: stable dependencies: [https-outcalls] requires: [icp-cli >= 0.1.0, mops, ic-cdk >= 0.19] tags: [ethereum, evm, json-rpc, cross-chain, eth, arbitrum, base, optimism] --- # EVM RPC Canister — Calling Ethereum from IC ## What This Is The EVM RPC canister is an IC system canister that proxies JSON-RPC calls to Ethereum and EVM-compatible chains via HTTPS outcalls. Your canister sends a request to the EVM RPC canister, which fans it out to multiple RPC providers, compares responses for consensus, and returns the result. No API keys required for default providers. No bridges or oracles needed. ## Prerequisites - `icp-cli` >= 0.1.0 (`brew install dfinity/tap/icp-cli`) - For Motoko: `mops` package manager, `core = "2.0.0"` in mops.toml - For Rust: `ic-cdk`, `candid`, `serde` - Cycles in your canister (each RPC call costs cycles) ## Canister IDs | Canister | ID | Subnet | |---|---|---| | EVM RPC (mainnet) | `7hfb6-caaaa-aaaar-qadga-cai` | 34-node fiduciary | ## Supported Chains | Chain | RpcServices Variant | Chain ID | |---|---|---| | Ethereum Mainnet | `#EthMainnet` | 1 | | Ethereum Sepolia | `#EthSepolia` | 11155111 | | Arbitrum One | `#ArbitrumOne` | 42161 | | Base Mainnet | `#BaseMainnet` | 8453 | | Optimism Mainnet | `#OptimismMainnet` | 10 | | Custom EVM chain | `#Custom` | any | ## RPC Providers Built-in providers (no API key needed for defaults): | Provider | Ethereum | Sepolia | Arbitrum | Base | Optimism | |---|---|---|---|---|---| | Alchemy | yes | yes | yes | yes | yes | | Ankr | yes | - | yes | yes | yes | | BlockPi | yes | yes | yes | yes | yes | | Cloudflare | yes | - | - | - | - | | LlamaNodes | yes | - | yes | yes | yes | | PublicNode | yes | yes | yes | yes | yes | ## Cycle Costs **Formula:** ``` (5_912_000 + 60_000 * nodes + 2400 * request_bytes + 800 * max_response_bytes) * nodes * rpc_count ``` Where `nodes` = 34 (fiduciary subnet), `rpc_count` = number of providers queried. 
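As a sanity check, the formula above can be evaluated directly. A minimal sketch (the function name `rpc_call_cost` and the request/response sizes are illustrative, not part of the EVM RPC API):

```rust
// Cycle cost per the formula above:
// (5_912_000 + 60_000*nodes + 2_400*request_bytes + 800*max_response_bytes) * nodes * rpc_count
fn rpc_call_cost(nodes: u128, request_bytes: u128, max_response_bytes: u128, rpc_count: u128) -> u128 {
    (5_912_000 + 60_000 * nodes + 2_400 * request_bytes + 800 * max_response_bytes)
        * nodes
        * rpc_count
}

fn main() {
    // A ~200-byte eth_getBalance request with a 1000-byte response cap,
    // fanned out to 3 providers on the 34-node fiduciary subnet:
    let cost = rpc_call_cost(34, 200, 1_000, 3);
    assert_eq!(cost, 941_664_000); // ~0.94B cycles, inside the typical 100M-1B range
}
```

This is why a 10B-cycle budget is a comfortable starting point: it covers the worst typical case several times over, and unused cycles are refunded.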
**Practical guidance:** Send 10_000_000_000 cycles (10B) as a starting budget. Unused cycles are refunded. Typical calls cost 100M-1B cycles (~$0.0001-$0.001 USD). Use `requestCost` to get an exact estimate before calling. ## Mistakes That Break Your Build 1. **Not sending enough cycles.** Every EVM RPC call requires cycles attached. If you send too few, the call fails silently or traps. Start with 10B cycles and adjust down after verifying. 2. **Ignoring the `Inconsistent` result variant.** Multi-provider calls return `#Consistent(result)` or `#Inconsistent(results)`. If providers disagree, you get `Inconsistent`. Always handle both arms or your canister traps on provider disagreement. 3. **Using wrong chain variant.** `#EthMainnet` is for Ethereum L1. For Arbitrum use `#ArbitrumOne`, for Base use `#BaseMainnet`. Using the wrong variant queries the wrong chain. 4. **Forgetting `null` for optional config.** The second argument to every RPC method is an optional config record. Pass `null` for defaults. Omitting it causes a Candid type mismatch. 5. **Response size limits.** Large responses (e.g., `eth_getLogs` with broad filters) can exceed the max response size. Set `max_response_bytes` appropriately or the call fails. 6. **Calling `eth_sendRawTransaction` without signing first.** The EVM RPC canister does not sign transactions. You must sign the transaction yourself (using threshold ECDSA via the IC management canister) and pass the raw signed bytes. 7. **Using `Cycles.add` instead of `await (with cycles = ...)` in mo:core.** In mo:core 2.0, `Cycles.add` does not exist. Attach cycles using `await (with cycles = AMOUNT) canister.method(args)`. This is the only way to attach cycles in mo:core. 
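Mistake 2 comes down to an exhaustive match over both result arms. A dependency-free sketch (the `MultiResult`/`RpcResult` shapes mirror the canister's Candid variants; provider identifiers are simplified to strings, and `unwrap_consensus` is an illustrative helper, not part of the EVM RPC interface):

```rust
// Simplified mirror of the EVM RPC multi-provider result variants.
#[derive(Debug, PartialEq)]
enum RpcResult<T> { Ok(T), Err(String) }
#[derive(Debug)]
enum MultiResult<T> { Consistent(RpcResult<T>), Inconsistent(Vec<(String, RpcResult<T>)>) }

// Collapse a multi-provider result into a plain Result instead of trapping,
// so provider disagreement becomes a handleable error.
fn unwrap_consensus<T>(r: MultiResult<T>) -> Result<T, String> {
    match r {
        MultiResult::Consistent(RpcResult::Ok(v)) => Ok(v),
        MultiResult::Consistent(RpcResult::Err(e)) => Err(format!("provider error: {e}")),
        MultiResult::Inconsistent(results) => Err(format!("{} providers disagreed", results.len())),
    }
}

fn main() {
    assert_eq!(unwrap_consensus(MultiResult::Consistent(RpcResult::Ok(42u64))), Ok(42));
    let split: MultiResult<u64> = MultiResult::Inconsistent(vec![
        ("Alchemy".into(), RpcResult::Ok(1)),
        ("Ankr".into(), RpcResult::Ok(2)),
    ]);
    assert!(unwrap_consensus(split).is_err());
}
```

The implementation below traps on `Inconsistent` for brevity; in production code, returning an error as sketched here (or retrying with a different provider set) is usually preferable to trapping.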
## Implementation ### icp.yaml Configuration #### Option A: Pull from mainnet (recommended for production) ```yaml canisters: evm_rpc: type: pull id: 7hfb6-caaaa-aaaar-qadga-cai backend: type: motoko main: src/backend/main.mo dependencies: - evm_rpc ``` Then run: ```bash icp deps pull icp deps init evm_rpc --argument '(record {})' icp deps deploy ``` #### Option B: Custom wasm (for local development) ```yaml canisters: evm_rpc: type: custom candid: https://github.com/internet-computer-protocol/evm-rpc-canister/releases/latest/download/evm_rpc.did wasm: https://github.com/internet-computer-protocol/evm-rpc-canister/releases/latest/download/evm_rpc.wasm.gz remote: id: ic: 7hfb6-caaaa-aaaar-qadga-cai backend: type: motoko main: src/backend/main.mo dependencies: - evm_rpc ``` ### Motoko #### mops.toml ```toml [package] name = "evm-rpc-app" version = "0.1.0" [dependencies] core = "2.0.0" ``` #### src/backend/main.mo — Get ETH Balance ```motoko import EvmRpc "canister:evm_rpc"; import Runtime "mo:core/Runtime"; import Text "mo:core/Text"; persistent actor { // Get ETH balance for an address on Ethereum mainnet public func getEthBalance(address : Text) : async Text { let services = #EthMainnet(null); // Use all default providers let config = null; // eth_call with balance check via raw JSON-RPC let json = "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBalance\",\"params\":[\"" # address # "\",\"latest\"],\"id\":1}"; let maxResponseBytes : Nat64 = 1000; // Get exact cost first let cyclesResult = await EvmRpc.requestCost(#EthMainnet(#PublicNode), json, maxResponseBytes); let cost = switch (cyclesResult) { case (#Ok(c)) { c }; case (#Err(err)) { Runtime.trap("requestCost failed: " # debug_show err) }; }; let result = await (with cycles = cost) EvmRpc.request( #EthMainnet(#PublicNode), json, maxResponseBytes ); switch (result) { case (#Ok(response)) { response }; case (#Err(err)) { Runtime.trap("RPC error: " # debug_show err) }; } }; // Get latest block using the typed API public 
func getLatestBlock() : async ?EvmRpc.Block { let services = #EthMainnet(null); let config = null; let result = await (with cycles = 10_000_000_000) EvmRpc.eth_getBlockByNumber( services, config, #Latest ); switch (result) { case (#Consistent(#Ok(block))) { ?block }; case (#Consistent(#Err(error))) { Runtime.trap("Error: " # debug_show error); }; case (#Inconsistent(_results)) { Runtime.trap("Providers returned inconsistent results"); }; } }; // Read ERC-20 token balance (e.g., USDC on Ethereum) // Function selector for balanceOf(address): 0x70a08231 // Pad address to 32 bytes (remove 0x prefix, left-pad with zeros) public func getErc20Balance(tokenContract : Text, walletAddress : Text) : async ?Text { let services = #EthMainnet(null); let config = null; // Encode: balanceOf(address) = 0x70a08231 + address padded to 32 bytes // walletAddress should be like "0xABC..." — strip 0x and left-pad to 64 hex chars let calldata = "0x70a08231000000000000000000000000" # stripHexPrefix(walletAddress); let result = await (with cycles = 10_000_000_000) EvmRpc.eth_call( services, config, { block = null; transaction = { to = ?tokenContract; input = ?calldata; // All optional fields set to null accessList = null; blobVersionedHashes = null; blobs = null; chainId = null; from = null; gas = null; gasPrice = null; maxFeePerBlobGas = null; maxFeePerGas = null; maxPriorityFeePerGas = null; nonce = null; type_ = null; value = null; }; } ); switch (result) { case (#Consistent(#Ok(response))) { ?response }; case (#Consistent(#Err(error))) { Runtime.trap("eth_call error: " # debug_show error); }; case (#Inconsistent(_)) { Runtime.trap("Inconsistent results from providers"); }; } }; // Helper: strip "0x" prefix from hex string func stripHexPrefix(hex : Text) : Text { let chars = hex.chars(); switch (chars.next(), chars.next()) { case (?'0', ?'x') { var rest = ""; for (c in chars) { rest #= Text.fromChar(c) }; rest }; case _ { hex }; } }; // Send a signed raw transaction public func
sendRawTransaction(signedTxHex : Text) : async ?EvmRpc.SendRawTransactionStatus { let services = #EthMainnet(null); let config = null; let result = await (with cycles = 10_000_000_000) EvmRpc.eth_sendRawTransaction( services, config, signedTxHex ); switch (result) { case (#Consistent(#Ok(status))) { ?status }; case (#Consistent(#Err(error))) { Runtime.trap("sendRawTransaction error: " # debug_show error); }; case (#Inconsistent(_)) { Runtime.trap("Inconsistent results"); }; } }; // Get transaction receipt public func getTransactionReceipt(txHash : Text) : async ?EvmRpc.TransactionReceipt { let services = #EthMainnet(null); let config = null; let result = await (with cycles = 10_000_000_000) EvmRpc.eth_getTransactionReceipt( services, config, txHash ); switch (result) { case (#Consistent(#Ok(receipt))) { receipt }; case (#Consistent(#Err(error))) { Runtime.trap("Error: " # debug_show error); }; case (#Inconsistent(_)) { Runtime.trap("Inconsistent results"); }; } }; // Using a specific provider (instead of multi-provider consensus) public func getBalanceViaPublicNode(address : Text) : async Text { let json = "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBalance\",\"params\":[\"" # address # "\",\"latest\"],\"id\":1}"; let maxResponseBytes : Nat64 = 1000; let result = await (with cycles = 10_000_000_000) EvmRpc.request( #EthMainnet(#PublicNode), // Single specific provider json, maxResponseBytes ); switch (result) { case (#Ok(response)) { response }; case (#Err(err)) { Runtime.trap("Error: " # debug_show err) }; } }; // Querying a different chain (Arbitrum) public func getArbitrumBlock() : async ?EvmRpc.Block { let result = await (with cycles = 10_000_000_000) EvmRpc.eth_getBlockByNumber( #ArbitrumOne(null), // Arbitrum One null, #Latest ); switch (result) { case (#Consistent(#Ok(block))) { ?block }; case (#Consistent(#Err(error))) { Runtime.trap("Error: " # debug_show error); }; case (#Inconsistent(_)) { Runtime.trap("Inconsistent results"); }; } }; // Using a custom RPC 
endpoint public func getBalanceCustomRpc(address : Text, rpcUrl : Text) : async Text { let json = "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBalance\",\"params\":[\"" # address # "\",\"latest\"],\"id\":1}"; let result = await (with cycles = 10_000_000_000) EvmRpc.request( #Custom({ url = rpcUrl; headers = null }), json, 1000 ); switch (result) { case (#Ok(response)) { response }; case (#Err(err)) { Runtime.trap("Error: " # debug_show err) }; } }; }; ``` ### Rust #### Cargo.toml ```toml [package] name = "evm_rpc_backend" version = "0.1.0" edition = "2021" [lib] crate-type = ["cdylib"] [dependencies] ic-cdk = "0.19" candid = "0.10" serde = { version = "1", features = ["derive"] } serde_json = "1" ``` #### src/lib.rs ```rust use candid::{CandidType, Deserialize, Principal}; use ic_cdk::call::Call; use ic_cdk::update; const EVM_RPC_CANISTER: &str = "7hfb6-caaaa-aaaar-qadga-cai"; fn evm_rpc_id() -> Principal { Principal::from_text(EVM_RPC_CANISTER).unwrap() } // -- Types matching the EVM RPC canister Candid interface -- #[derive(CandidType, Deserialize, Clone, Debug)] enum RpcServices { EthMainnet(Option<Vec<EthMainnetService>>), EthSepolia(Option<Vec<EthSepoliaService>>), ArbitrumOne(Option<Vec<L2MainnetService>>), BaseMainnet(Option<Vec<L2MainnetService>>), OptimismMainnet(Option<Vec<L2MainnetService>>), Custom { #[serde(rename = "chainId")] chain_id: u64, services: Vec<CustomRpcService>, }, } #[derive(CandidType, Deserialize, Clone, Debug)] enum RpcService { EthMainnet(EthMainnetService), EthSepolia(EthSepoliaService), ArbitrumOne(L2MainnetService), BaseMainnet(L2MainnetService), OptimismMainnet(L2MainnetService), Custom(CustomRpcService), Provider(u64), } #[derive(CandidType, Deserialize, Clone, Debug)] enum EthMainnetService { Alchemy, Ankr, BlockPi, Cloudflare, Llama, PublicNode, } #[derive(CandidType, Deserialize, Clone, Debug)] enum EthSepoliaService { Alchemy, Ankr, BlockPi, PublicNode, Sepolia, } #[derive(CandidType, Deserialize, Clone, Debug)] enum L2MainnetService { Alchemy, Ankr, BlockPi, Llama, PublicNode, } #[derive(CandidType, Deserialize, Clone, Debug)] struct HttpHeader { name:
String, value: String, } #[derive(CandidType, Deserialize, Clone, Debug)] struct CustomRpcService { url: String, headers: Option<Vec<HttpHeader>>, } #[derive(CandidType, Deserialize, Clone, Debug)] enum BlockTag { Latest, Safe, Finalized, Earliest, Pending, Number(candid::Nat), } #[derive(CandidType, Deserialize, Clone, Debug)] enum MultiResult<T> { Consistent(RpcResult<T>), Inconsistent(Vec<(RpcService, RpcResult<T>)>), } #[derive(CandidType, Deserialize, Clone, Debug)] enum RpcResult<T> { Ok(T), Err(RpcError), } #[derive(CandidType, Deserialize, Clone, Debug)] enum RpcError { ProviderError(ProviderError), HttpOutcallError(HttpOutcallError), JsonRpcError(JsonRpcError), ValidationError(ValidationError), } #[derive(CandidType, Deserialize, Clone, Debug)] enum ProviderError { TooFewCycles { expected: candid::Nat, received: candid::Nat }, MissingRequiredProvider, ProviderNotFound, NoPermission, InvalidRpcConfig(String), } #[derive(CandidType, Deserialize, Clone, Debug)] enum RejectionCode { NoError, CanisterError, SysTransient, DestinationInvalid, Unknown, SysFatal, CanisterReject, } #[derive(CandidType, Deserialize, Clone, Debug)] enum HttpOutcallError { IcError { code: RejectionCode, message: String }, InvalidHttpJsonRpcResponse { status: u16, body: String, #[serde(rename = "parsingError")] parsing_error: Option<String>, }, } #[derive(CandidType, Deserialize, Clone, Debug)] struct JsonRpcError { code: i64, message: String, } #[derive(CandidType, Deserialize, Clone, Debug)] enum ValidationError { Custom(String), InvalidHex(String), } #[derive(CandidType, Deserialize, Clone, Debug)] struct Block { #[serde(rename = "baseFeePerGas")] base_fee_per_gas: Option<candid::Nat>, number: candid::Nat, difficulty: Option<candid::Nat>, #[serde(rename = "extraData")] extra_data: String, #[serde(rename = "gasLimit")] gas_limit: candid::Nat, #[serde(rename = "gasUsed")] gas_used: candid::Nat, hash: String, #[serde(rename = "logsBloom")] logs_bloom: String, miner: String, #[serde(rename = "mixHash")] mix_hash: String, nonce: candid::Nat,
#[serde(rename = "parentHash")] parent_hash: String, #[serde(rename = "receiptsRoot")] receipts_root: String, #[serde(rename = "sha3Uncles")] sha3_uncles: String, size: candid::Nat, #[serde(rename = "stateRoot")] state_root: String, timestamp: candid::Nat, #[serde(rename = "totalDifficulty")] total_difficulty: Option, transactions: Vec, #[serde(rename = "transactionsRoot")] transactions_root: Option, uncles: Vec, } #[derive(CandidType, Deserialize, Clone, Debug)] enum SendRawTransactionStatus { Ok(Option), NonceTooLow, NonceTooHigh, InsufficientFunds, } // -- Get ETH balance via raw JSON-RPC -- #[update] async fn get_eth_balance(address: String) -> String { let json = format!( r#"{{"jsonrpc":"2.0","method":"eth_getBalance","params":["{}","latest"],"id":1}}"#, address ); let max_response_bytes: u64 = 1000; let cycles: u128 = 10_000_000_000; let (result,): (Result,) = Call::unbounded_wait(evm_rpc_id(), "request") .with_args(&( RpcService::EthMainnet(EthMainnetService::PublicNode), json, max_response_bytes, )) .with_cycles(cycles) .await .expect("Failed to call EVM RPC canister") .candid_tuple() .expect("Failed to decode response"); match result { Ok(response) => response, Err(err) => ic_cdk::trap(&format!("RPC error: {:?}", err)), } } // -- Get latest block via typed API -- #[update] async fn get_latest_block() -> Block { let cycles: u128 = 10_000_000_000; let (result,): (MultiResult,) = Call::unbounded_wait(evm_rpc_id(), "eth_getBlockByNumber") .with_args(&( RpcServices::EthMainnet(None), None::<()>, // config BlockTag::Latest, )) .with_cycles(cycles) .await .expect("Failed to call eth_getBlockByNumber") .candid_tuple() .expect("Failed to decode response"); match result { MultiResult::Consistent(RpcResult::Ok(block)) => block, MultiResult::Consistent(RpcResult::Err(err)) => { ic_cdk::trap(&format!("RPC error: {:?}", err)) } MultiResult::Inconsistent(_) => { ic_cdk::trap("Providers returned inconsistent results") } } } // -- Read ERC-20 balance -- #[update] async fn 
get_erc20_balance(token_contract: String, wallet_address: String) -> String { // balanceOf(address) selector: 0x70a08231 // Pad the address to 32 bytes (strip 0x, left-pad with zeros) let addr = wallet_address.trim_start_matches("0x"); let calldata = format!("0x70a08231{:0>64}", addr); let json = format!( r#"{{"jsonrpc":"2.0","method":"eth_call","params":[{{"to":"{}","data":"{}"}},"latest"],"id":1}}"#, token_contract, calldata ); let cycles: u128 = 10_000_000_000; let (result,): (Result,) = Call::unbounded_wait(evm_rpc_id(), "request") .with_args(&( RpcService::EthMainnet(EthMainnetService::PublicNode), json, 2048_u64, )) .with_cycles(cycles) .await .expect("Failed to call EVM RPC canister") .candid_tuple() .expect("Failed to decode response"); match result { Ok(response) => response, Err(err) => ic_cdk::trap(&format!("RPC error: {:?}", err)), } } // -- Send signed raw transaction -- #[update] async fn send_raw_transaction(signed_tx_hex: String) -> SendRawTransactionStatus { let cycles: u128 = 10_000_000_000; let (result,): (MultiResult,) = Call::unbounded_wait(evm_rpc_id(), "eth_sendRawTransaction") .with_args(&( RpcServices::EthMainnet(None), None::<()>, signed_tx_hex, )) .with_cycles(cycles) .await .expect("Failed to call eth_sendRawTransaction") .candid_tuple() .expect("Failed to decode response"); match result { MultiResult::Consistent(RpcResult::Ok(status)) => status, MultiResult::Consistent(RpcResult::Err(err)) => { ic_cdk::trap(&format!("RPC error: {:?}", err)) } MultiResult::Inconsistent(_) => { ic_cdk::trap("Providers returned inconsistent results") } } } // -- Query Arbitrum (different chain example) -- #[update] async fn get_arbitrum_block() -> Block { let cycles: u128 = 10_000_000_000; let (result,): (MultiResult,) = Call::unbounded_wait(evm_rpc_id(), "eth_getBlockByNumber") .with_args(&( RpcServices::ArbitrumOne(None), None::<()>, BlockTag::Latest, )) .with_cycles(cycles) .await .expect("Failed to call eth_getBlockByNumber") .candid_tuple() 
.expect("Failed to decode response"); match result { MultiResult::Consistent(RpcResult::Ok(block)) => block, MultiResult::Consistent(RpcResult::Err(err)) => { ic_cdk::trap(&format!("RPC error: {:?}", err)) } MultiResult::Inconsistent(_) => { ic_cdk::trap("Inconsistent results") } } } ic_cdk::export_candid!(); ``` ## Deploy & Test ### Local Development ```bash # Start local replica icp network start -d # Pull the EVM RPC canister icp deps pull icp deps init evm_rpc --argument '(record {})' icp deps deploy # Deploy your backend icp deploy backend ``` ### Deploy to Mainnet ```bash # On mainnet, the EVM RPC canister is already deployed. # Your canister calls it directly by principal. icp deploy backend -e ic ``` ### Test via icp CLI ```bash # Set up variables export CYCLES=10000000000 # Get ETH balance (raw JSON-RPC via single provider) icp canister call evm_rpc request '( variant { EthMainnet = variant { PublicNode } }, "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBalance\",\"params\":[\"0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045\",\"latest\"],\"id\":1}", 1000 )' --with-cycles=$CYCLES # Get latest block (typed API, multi-provider) icp canister call evm_rpc eth_getBlockByNumber '( variant { EthMainnet = null }, null, variant { Latest } )' --with-cycles=$CYCLES # Get transaction receipt icp canister call evm_rpc eth_getTransactionReceipt '( variant { EthMainnet = null }, null, "0xdd5d4b18923d7aae953c7996d791118102e889bea37b48a651157a4890e4746f" )' --with-cycles=$CYCLES # Check available providers icp canister call evm_rpc getProviders # Estimate cost before calling icp canister call evm_rpc requestCost '( variant { EthMainnet = variant { PublicNode } }, "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBalance\",\"params\":[\"0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045\",\"latest\"],\"id\":1}", 1000 )' ``` ## Verify It Works ### Check ETH Balance ```bash icp canister call backend get_eth_balance '("0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045")' # Expected: JSON string like 
'{"jsonrpc":"2.0","id":1,"result":"0x..."}' # The result is the balance in wei (hex encoded) ``` ### Check Latest Block ```bash icp canister call backend get_latest_block # Expected: record { number = ...; hash = "0x..."; timestamp = ...; ... } ``` ### Check ERC-20 Balance (USDC) ```bash # USDC contract on Ethereum: 0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48 icp canister call backend get_erc20_balance '( "0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48", "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045" )' # Expected: JSON with hex-encoded uint256 balance ``` ### Verify Cycle Refunds Check your canister cycle balance before and after an RPC call: ```bash # Before icp canister status backend -e ic # Make a call icp canister call backend get_eth_balance '("0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045")' -e ic # After — unused cycles from the 10B budget are refunded icp canister status backend -e ic ``` --- --- name: https-outcalls title: HTTPS Outcalls category: Integration description: "Make HTTP requests from canisters to external APIs. Consensus-safe request patterns, transform functions, and cost management." endpoints: 4 version: 1.5.3 status: stable dependencies: [] requires: [icp-cli >= 0.1.0] tags: [http, api, fetch, external, request, transform, outcall] --- # HTTPS Outcalls ## What This Is HTTPS outcalls allow canisters to make HTTP requests to external web services directly from on-chain code. Because the Internet Computer runs on a replicated subnet (multiple nodes execute the same code), all nodes must agree on the response. A transform function strips non-deterministic fields (timestamps, request IDs, ordering) so that every replica sees an identical response and can reach consensus. 
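The consensus requirement can be demonstrated off-chain. Below is a minimal Python sketch (illustration only, not canister code; the `transform` function and sample payloads are made up) of what a transform must achieve: turn responses that differ only in non-deterministic details into byte-identical output.

```python
import json

def transform(raw_body: bytes, raw_headers: list) -> bytes:
    """Mimic an outcall transform: drop all headers and re-serialize the
    JSON body with sorted keys, so every replica produces identical bytes."""
    payload = json.loads(raw_body)
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()

# Two replicas may receive the same data with different field ordering
# and different per-request headers:
replica_a = transform(b'{"usd": 12.34, "id": "internet-computer"}', [("date", "now")])
replica_b = transform(b'{"id": "internet-computer", "usd": 12.34}', [])
assert replica_a == replica_b  # identical bytes -> consensus can be reached
```

Without the normalization step, the two byte strings would differ and the subnet could not agree on a single response.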
## Prerequisites - icp-cli >= 0.1.0 (`brew install dfinity/tap/icp-cli`) - For Motoko: `moc` compiler (included with icp-cli), `mo:core` 2.0 in mops.toml - For Rust: `ic-cdk >= 0.19`, `serde_json` for JSON parsing ## Canister IDs HTTPS outcalls use the IC management canister: | Name | Canister ID | Used For | |------|-------------|----------| | Management canister | `aaaaa-aa` | The `http_request` management call target | You do not deploy anything extra. The management canister is built into every subnet. ## Mistakes That Break Your Build 1. **Forgetting the transform function.** Without a transform, the raw HTTP response often differs between replicas (different headers, different ordering in JSON fields, timestamps). Consensus fails and the call is rejected. ALWAYS provide a transform function. 2. **Not attaching cycles to the call.** HTTPS outcalls are not free. The calling canister must attach cycles to cover the cost. If you attach zero cycles, the call fails immediately. Cost is approximately 49_140_000 + 5_200 * response_bytes + 10_400 * request_bytes cycles. A safe default for most API calls is 200_000_000 (200M) cycles. 3. **Using HTTP instead of HTTPS.** The IC only supports HTTPS outcalls. Plain HTTP URLs are rejected. The target server must have a valid TLS certificate. 4. **Exceeding the 2MB response limit.** The maximum response body is 2MB (2_097_152 bytes). If the external API returns more, the call fails. Use the `max_response_bytes` field to set a limit and design your queries to return small responses. 5. **Non-idempotent POST requests without caution.** Because multiple replicas make the same request, a POST endpoint that is not idempotent (e.g., "create order") will be called N times (once per replica, typically 13 on a 13-node subnet). Use idempotency keys or design endpoints to handle duplicate requests. 6. **Not handling outcall failures.** External servers can be down, slow, or return errors. Always handle the error case. 
On the IC, if the external server does not respond within the timeout (~30 seconds), the call traps. 7. **Calling localhost or private IPs.** HTTPS outcalls can only reach public internet endpoints. Localhost, 10.x.x.x, 192.168.x.x, and other private ranges are blocked. 8. **Forgetting the `Host` header.** Some API endpoints require the `Host` header to be explicitly set. The IC does not automatically set this from the URL. ## Implementation ### Motoko ```motoko import Blob "mo:core/Blob"; import Nat64 "mo:core/Nat64"; import Text "mo:core/Text"; import Runtime "mo:core/Runtime"; persistent actor { // Type definitions for the management canister HTTP interface type HttpRequestArgs = { url : Text; max_response_bytes : ?Nat64; headers : [HttpHeader]; body : ?[Nat8]; method : HttpMethod; transform : ?TransformRawResponseFunction; }; type HttpHeader = { name : Text; value : Text; }; type HttpMethod = { #get; #post; #head; }; type HttpResponsePayload = { status : Nat; headers : [HttpHeader]; body : [Nat8]; }; type TransformRawResponseFunction = { function : shared query TransformArgs -> async HttpResponsePayload; context : Blob; }; type TransformArgs = { response : HttpResponsePayload; context : Blob; }; // The management canister for making outcalls transient let ic : actor { http_request : HttpRequestArgs -> async HttpResponsePayload; } = actor "aaaaa-aa"; // Transform function: strips headers and keeps only the body. // This ensures all replicas see the same response for consensus. // MUST be a `shared query` function. 
public query func transform(args : TransformArgs) : async HttpResponsePayload { { status = args.response.status; body = args.response.body; headers = []; // Strip headers -- they often contain non-deterministic values }; }; // GET request: fetch a JSON API public func fetchPrice() : async Text { let url = "https://api.coingecko.com/api/v3/simple/price?ids=internet-computer&vs_currencies=usd"; let request : HttpRequestArgs = { url = url; max_response_bytes = ?Nat64.fromNat(10_000); // Limit response size headers = [ { name = "User-Agent"; value = "ic-canister" }, ]; body = null; method = #get; transform = ?{ function = transform; context = "" : Blob; }; }; // Attach cycles for the outcall (200M is safe for most requests) // In mo:core, use `await (with cycles = N)` instead of the old Cycles.add(N) let response = await (with cycles = 200_000_000) ic.http_request(request); // Decode the response body let bodyBlob = Blob.fromArray(response.body); let body = Text.decodeUtf8(bodyBlob); switch (body) { case (?text) { text }; case (null) { Runtime.trap("Response is not valid UTF-8") }; }; }; // POST request: send JSON data public func postData(jsonPayload : Text) : async Text { let url = "https://httpbin.org/post"; let bodyBytes = Blob.toArray(Text.encodeUtf8(jsonPayload)); let request : HttpRequestArgs = { url = url; max_response_bytes = ?Nat64.fromNat(50_000); headers = [ { name = "Content-Type"; value = "application/json" }, { name = "User-Agent"; value = "ic-canister" }, // Idempotency key: prevents duplicate processing if multiple replicas hit the endpoint { name = "Idempotency-Key"; value = "unique-request-id-12345" }, ]; body = ?bodyBytes; method = #post; transform = ?{ function = transform; context = "" : Blob; }; }; // POST may cost more due to request body size let response = await (with cycles = 300_000_000) ic.http_request(request); let bodyBlob = Blob.fromArray(response.body); let body = Text.decodeUtf8(bodyBlob); switch (body) { case (?text) { text }; case 
(null) { Runtime.trap("Response is not valid UTF-8") }; }; }; }; ``` ### Rust ```toml # Cargo.toml [package] name = "https_outcalls_backend" version = "0.1.0" edition = "2021" [lib] crate-type = ["cdylib"] [dependencies] ic-cdk = "0.19" candid = "0.10" serde = { version = "1", features = ["derive"] } serde_json = "1" ``` ```rust use ic_cdk::api::canister_self; use ic_cdk::management_canister::{ http_request, HttpHeader, HttpMethod, HttpRequestArgs, HttpRequestResult, TransformArgs, TransformContext, TransformFunc, }; use ic_cdk::{query, update}; use serde::Deserialize; /// Transform function: strips non-deterministic headers so all replicas agree. /// MUST be a #[query] function. #[query] fn transform(args: TransformArgs) -> HttpRequestResult { HttpRequestResult { status: args.response.status, body: args.response.body, headers: vec![], // Strip all headers for consensus // If you need specific headers, filter them here: // headers: args.response.headers.into_iter() // .filter(|h| h.name.to_lowercase() == "content-type") // .collect(), } } /// GET request: Fetch JSON from an external API #[update] async fn fetch_price() -> String { let url = "https://api.coingecko.com/api/v3/simple/price?ids=internet-computer&vs_currencies=usd"; let request = HttpRequestArgs { url: url.to_string(), max_response_bytes: Some(10_000), method: HttpMethod::GET, headers: vec![ HttpHeader { name: "User-Agent".to_string(), value: "ic-canister".to_string(), }, ], body: None, transform: Some(TransformContext { function: TransformFunc::new(canister_self(), "transform".to_string()), context: vec![], }), is_replicated: None, }; // ic-cdk 0.19 automatically computes and attaches the required cycles match http_request(&request).await { Ok(response) => { let body = String::from_utf8(response.body) .unwrap_or_else(|_| "Invalid UTF-8 in response".to_string()); if response.status != candid::Nat::from(200u64) { return format!("HTTP error: status {}", response.status); } body } Err(err) => { 
            format!("HTTP outcall failed: {:?}", err)
        }
    }
}

/// Typed response parsing example
#[derive(Deserialize)]
struct PriceResponse {
    #[serde(rename = "internet-computer")]
    internet_computer: PriceData,
}

#[derive(Deserialize)]
struct PriceData {
    usd: f64,
}

#[update]
async fn get_icp_price_usd() -> String {
    let body = fetch_price().await;
    match serde_json::from_str::<PriceResponse>(&body) {
        Ok(parsed) => format!("ICP price: ${:.2}", parsed.internet_computer.usd),
        Err(e) => format!("Failed to parse price response: {}", e),
    }
}

/// POST request: Send JSON data to an external API
#[update]
async fn post_data(json_payload: String) -> String {
    let url = "https://httpbin.org/post";

    let request = HttpRequestArgs {
        url: url.to_string(),
        max_response_bytes: Some(50_000),
        method: HttpMethod::POST,
        headers: vec![
            HttpHeader {
                name: "Content-Type".to_string(),
                value: "application/json".to_string(),
            },
            HttpHeader {
                name: "User-Agent".to_string(),
                value: "ic-canister".to_string(),
            },
            // Idempotency key: prevents duplicate processing across replicas
            HttpHeader {
                name: "Idempotency-Key".to_string(),
                value: "unique-request-id-12345".to_string(),
            },
        ],
        body: Some(json_payload.into_bytes()),
        transform: Some(TransformContext {
            function: TransformFunc::new(canister_self(), "transform".to_string()),
            context: vec![],
        }),
        is_replicated: None,
    };

    // ic-cdk 0.19 automatically computes and attaches the required cycles
    match http_request(&request).await {
        Ok(response) => String::from_utf8(response.body)
            .unwrap_or_else(|_| "Invalid UTF-8 in response".to_string()),
        Err(err) => format!("HTTP outcall failed: {:?}", err),
    }
}
```

### Cycle Cost Estimation

```
Base cost:            49_140_000 cycles
+ per request byte:   10_400 cycles
+ per response byte:  5_200 cycles
+ per request header: variable

Example: GET request, 5KB response
  49_140_000 + (0 * 10_400) + (5_120 * 5_200) = ~75_764_000 cycles
  Safe budget: 200_000_000 (200M)

Example: POST request, 1KB body, 10KB response
  49_140_000 + (1_024 * 10_400) + (10_240 * 5_200)
= ~112_977_600 cycles Safe budget: 300_000_000 (300M) ``` Always over-budget. Unused cycles are refunded to the canister. ## Deploy & Test ### Local Deployment ```bash # Start the local replica icp network start -d # Deploy your canister icp deploy backend ``` Note: HTTPS outcalls work on the local replica. icp-cli proxies the requests through the local HTTP gateway. ### Mainnet Deployment ```bash # Ensure your canister has enough cycles (check balance first) icp canister status backend -e ic # Deploy icp deploy -e ic backend ``` ## Verify It Works ```bash # 1. Test the GET outcall (fetch price) icp canister call backend fetchPrice # Expected: Something like '("{\"internet-computer\":{\"usd\":12.34}}")' # (actual price will vary) # 2. Test the POST outcall icp canister call backend postData '("{\"test\": \"hello\"}")' # Expected: JSON response from httpbin.org echoing back your data # 3. If using Rust with the typed parser: icp canister call backend get_icp_price_usd # Expected: '("ICP price: $12.34")' # 4. Check canister cycle balance (outcalls consume cycles) icp canister status backend # Verify the balance decreased slightly after outcalls # 5. Test error handling: call with an unreachable URL # Add a test function that calls a non-existent domain and verify # it returns an error message rather than trapping ``` ### Debugging Outcall Failures If an outcall fails: ```bash # Check the replica log for detailed error messages # Local: icp output shows errors inline # Mainnet: check the canister logs # Common errors: # "Timeout" -- external server took too long (>30s) # "No consensus" -- transform function is missing or not stripping enough # "Body size exceeds limit" -- response > max_response_bytes # "Not enough cycles" -- attach more cycles to the call ``` ### Transform Debugging If you get "no consensus could be reached" errors, your transform function is not making responses identical. Common culprits: 1. 
**Response headers differ** -- strip ALL headers in the transform
2. **JSON field ordering differs** -- parse and re-serialize the JSON in the transform
3. **Timestamps in response body** -- extract only the fields you need

Advanced transform that normalizes JSON:

```rust
#[query]
fn transform_normalize(args: TransformArgs) -> HttpRequestResult {
    // Parse and re-serialize to normalize field ordering
    let body = if let Ok(json) = serde_json::from_slice::<serde_json::Value>(&args.response.body) {
        serde_json::to_vec(&json).unwrap_or(args.response.body)
    } else {
        args.response.body
    };
    HttpRequestResult {
        status: args.response.status,
        body,
        headers: vec![],
    }
}
```

---

---
name: ic-dashboard
title: IC Dashboard APIs
category: Integration
description: "Use the public REST APIs that power dashboard.internetcomputer.org. Get data for canisters, ledgers, SNS, and metrics."
endpoints: 12
version: 1.0.1
status: stable
dependencies: []
requires: []
tags: [dashboard, api, rest, openapi, swagger, ic-api, icrc-api, sns-api, ledger-api, metrics-api]
---

# IC Dashboard APIs

## What This Is

These public REST APIs power **dashboard.internetcomputer.org**. They expose read-only access to canister metadata, ICRC ledgers, SNS data, the ICP ledger, and network metrics via OpenAPI specs and Swagger UI. Agents and scripts can call them over HTTPS from off-chain (no canister deployment or cycles required).

**Prefer v2 or higher API versions** where available; they provide cursor-based pagination (`after`, `before`, `limit`) and are the same surface the dashboard uses.

## Prerequisites

- Any HTTP client: `curl`, `fetch`, `axios`, or the language's native HTTP library.
- No `icp-cli` or canister deployment needed for read-only API access.
- For OpenAPI-based codegen: optional use of the `openapi.json` URLs with your preferred OpenAPI tooling.
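Cursor pagination on the v2+ endpoints follows one pattern everywhere: request a page, read the cursor from the response, pass it back as `after`. A small Python sketch of that loop (the `get_page` callable and the stub pages are hypothetical; the `data`/`next_cursor` shape follows the v2 responses described in this document):

```python
def fetch_all(get_page, limit=50):
    """Drain a v2-style cursor-paginated endpoint.

    `get_page(after, limit)` must return a dict shaped like the v2
    responses: {"data": [...], "next_cursor": <cursor or None>}.
    """
    items, after = [], None
    while True:
        page = get_page(after, limit)
        items.extend(page["data"])
        after = page.get("next_cursor")
        if not after:          # last page: no cursor to follow
            return items

# Stub standing in for e.g. GET /api/v2/ledgers?after=...&limit=...
pages = {
    None: {"data": [1, 2], "next_cursor": "c1"},
    "c1": {"data": [3], "next_cursor": None},
}
assert fetch_all(lambda after, limit: pages[after]) == [1, 2, 3]
```

In a real client, `get_page` would wrap an HTTPS GET against one of the base URLs above, URL-encoding the cursor into the `after` query parameter.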
## API Base URLs and Docs | API | Base URL | OpenAPI spec | Swagger / Docs | Prefer | |-----|----------|--------------|----------------|--------| | IC API | `https://ic-api.internetcomputer.org` | `/api/v3/openapi.json` | `/api/v3/swagger` | v4 for canisters, subnets (cursor pagination) | | ICRC API | `https://icrc-api.internetcomputer.org` | `/openapi.json` | `/docs` | v2 for ledgers (TestICP and other ICRC tokens; **not** mainnet ICP) | | SNS API | `https://sns-api.internetcomputer.org` | `/openapi.json` | `/docs` | v2 for snses, proposals, neurons | | Ledger API (mainnet ICP) | `https://ledger-api.internetcomputer.org` | `/openapi.json` | `/swagger-ui/` | Use for **ICP token**; v2 for cursor pagination | | Metrics API | `https://metrics-api.internetcomputer.org` | `/api/v1/openapi.json` | `/api/v1/docs` | v1 (no newer version) | Full URLs for specs and UI: - IC API: https://ic-api.internetcomputer.org/api/v3/openapi.json — https://ic-api.internetcomputer.org/api/v3/swagger - ICRC API: https://icrc-api.internetcomputer.org/openapi.json — https://icrc-api.internetcomputer.org/docs - SNS API: https://sns-api.internetcomputer.org/openapi.json — https://sns-api.internetcomputer.org/docs - Ledger API: https://ledger-api.internetcomputer.org/openapi.json — https://ledger-api.internetcomputer.org/swagger-ui/ - Metrics API: https://metrics-api.internetcomputer.org/api/v1/openapi.json — https://metrics-api.internetcomputer.org/api/v1/docs ## How It Works 1. **Prefer v2+ APIs with cursor pagination.** IC API v4 (`/api/v4/canisters`, `/api/v4/subnets`), ICRC API v2 (`/api/v2/ledgers`, `/api/v2/ledgers/{id}/transactions`, etc.), and SNS API v2 (`/api/v2/snses`, `/api/v2/snses/{id}/proposals`, `/api/v2/snses/{id}/neurons`) use `after`, `before`, and `limit` for stable, efficient paging. Avoid v1/offset-based endpoints when a v2+ alternative exists. 2. 
**Choose the right API** for the data you need: IC API (canisters, subnets, NNS neurons/proposals), **Ledger API for mainnet ICP** (accounts, transactions, supply), ICRC API for **other** ICRC ledgers only (ckBTC, SNS tokens, testicp — ICRC API does not expose mainnet ICP), SNS API (SNS list, neurons, proposals), Metrics API (governance, cycles, Bitcoin, etc.). 3. **Use the OpenAPI spec** to get exact path, query, and body schemas and response shapes; prefer the spec over hand-written docs to avoid drift. 4. **Call over HTTPS** with `GET` (or documented method). Use the `next_cursor` / `previous_cursor` from v2+ responses to request the next or previous page. ## Mistakes That Break Your Build 1. **Wrong base URL or API version.** IC API uses `/api/v3/` (and v4 for canisters/subnets); ICRC has `/api/v1/` and `/api/v2/` (ICRC API does not serve mainnet ICP — use Ledger API). Ledger API uses unversioned paths for some endpoints (e.g. `/accounts`, `/supply/total/latest`) and `/v2/` for cursor-paginated lists. Metrics API uses `/api/v1/`. Using the wrong prefix returns 404 or wrong schema. 2. **Canister ID format.** Canister IDs in paths and queries must match the principal-like pattern: 27 characters, five groups of five plus a final three (e.g. `ryjl3-tyaaa-aaaaa-aaaba-cai`). Subnet IDs use the longer pattern (e.g. 63 chars). Sending a raw principal string in the wrong encoding or length causes 422 or 400. 3. **Using ICRC API for mainnet ICP.** ICRC API exposes **test ICP (TestICP) only**, not mainnet ICP. For mainnet ICP token data (accounts, transactions, supply) use **Ledger API** (`ledger-api.internetcomputer.org`). Use ICRC API for other ICRC ledgers (e.g. ckBTC, SNS tokens) and for TestICP. 4. **ICRC API: ledger_canister_id in path.** ICRC endpoints require `ledger_canister_id` in the path (e.g. `/api/v2/ledgers/{ledger_canister_id}/transactions`). Use the canister ID of the ledger you want (e.g. ckBTC `mxzaz-hqaaa-aaaar-qaada-cai`). 
Do not use ICRC API for mainnet ICP — use Ledger API instead. 5. **Using v1 or offset-based pagination when v2+ exists.** Always prefer v2 or higher endpoints that support cursor pagination (`after`, `before`, `limit`). IC API v4 (canisters, subnets), ICRC API v2 (ledgers, accounts, transactions), and SNS API v2 (snses, proposals, neurons) return `next_cursor`/`previous_cursor` and accept cursor query params. Older v1/offset/`max_*_index` endpoints are legacy; using the wrong pagination model returns empty or incorrect pages. 6. **Timestamps.** Most time-range query params (`start`, `end`) expect Unix seconds (integer). Sending milliseconds or ISO strings causes validation errors (422). 7. **Account identifier format.** Ledger API and ICRC/ICP endpoints use **account identifiers** (hex hashes), not raw principals, for account-specific paths. Use the same encoding the API documents (e.g. 64-char hex for account_identifier where required). 8. **Assuming authentication.** These public dashboard APIs do not require API keys or auth for the documented read endpoints. If you get 401/403, confirm you are not hitting a different environment or a write endpoint that requires auth. ## Implementation ### IC API — Canisters and subnets (prefer v4 with cursor pagination) ```bash # List canisters (v4: cursor pagination, next_cursor/previous_cursor in response) curl -s "https://ic-api.internetcomputer.org/api/v4/canisters?limit=5" # Next page: use after= from previous response's next_cursor (see OpenAPI for cursor format) # curl -s "https://ic-api.internetcomputer.org/api/v4/canisters?limit=5&after=..." 
# Get one canister by ID (v3; no v4 single-canister endpoint) curl -s "https://ic-api.internetcomputer.org/api/v3/canisters/ryjl3-tyaaa-aaaaa-aaaba-cai" # List subnets (v4: cursor pagination) curl -s "https://ic-api.internetcomputer.org/api/v4/subnets?limit=10" # List NNS proposals (v3; use limit) curl -s "https://ic-api.internetcomputer.org/api/v3/proposals?limit=5" ``` ### ICRC API — Other ICRC ledgers only (v2 with cursor pagination) ICRC API exposes **TestICP and other ICRC ledgers (e.g. ckBTC, SNS tokens), not mainnet ICP.** For mainnet ICP use Ledger API. ```bash # List ledgers (v2: after/before/limit, next_cursor/previous_cursor in response) curl -s "https://icrc-api.internetcomputer.org/api/v2/ledgers?limit=10" # Get one ledger (e.g. ckBTC — mainnet ICP is not on ICRC API) curl -s "https://icrc-api.internetcomputer.org/api/v2/ledgers/mxzaz-hqaaa-aaaar-qaada-cai" # List transactions for a ledger (v2: cursor pagination) curl -s "https://icrc-api.internetcomputer.org/api/v2/ledgers/mxzaz-hqaaa-aaaar-qaada-cai/transactions?limit=5" # List accounts for a ledger (v2: after/before/limit) curl -s "https://icrc-api.internetcomputer.org/api/v2/ledgers/mxzaz-hqaaa-aaaar-qaada-cai/accounts?limit=10" ``` ### SNS API — SNS list and proposals (prefer v2 with cursor pagination) ```bash # List SNSes (v2: after/before/limit, next_cursor/previous_cursor) curl -s "https://sns-api.internetcomputer.org/api/v2/snses?limit=10" # List proposals for an SNS root canister (v2: cursor pagination) # Replace ROOT_CANISTER_ID with a real SNS root canister ID curl -s "https://sns-api.internetcomputer.org/api/v2/snses/ROOT_CANISTER_ID/proposals?limit=5" # List neurons for an SNS (v2: after/before/limit) curl -s "https://sns-api.internetcomputer.org/api/v2/snses/ROOT_CANISTER_ID/neurons?limit=10" ``` ### Ledger API — Mainnet ICP token (prefer v2 for cursor pagination) ```bash # List accounts (v2: after/before/limit, next_cursor/prev_cursor) curl -s 
"https://ledger-api.internetcomputer.org/v2/accounts?limit=10" # Get account by account_identifier (64-char hex) curl -s "https://ledger-api.internetcomputer.org/accounts/ACCOUNT_IDENTIFIER" # List transactions (v2: cursor pagination) curl -s "https://ledger-api.internetcomputer.org/v2/transactions?limit=10" # Total supply (latest) curl -s "https://ledger-api.internetcomputer.org/supply/total/latest" ``` ### Metrics API ```bash # Average cycle burn rate curl -s "https://metrics-api.internetcomputer.org/api/v1/average-cycle-burn-rate" # Governance metrics curl -s "https://metrics-api.internetcomputer.org/api/v1/governance-metrics" # ICP/XDR conversion rates (with optional start/end/step) curl -s "https://metrics-api.internetcomputer.org/api/v1/icp-xdr-conversion-rates?start=1700000000&end=1700086400&step=86400" ``` ### Fetching OpenAPI spec (for codegen or validation) ```bash # IC API v3 curl -s "https://ic-api.internetcomputer.org/api/v3/openapi.json" -o ic-api-v3.json # ICRC API curl -s "https://icrc-api.internetcomputer.org/openapi.json" -o icrc-api.json # SNS API curl -s "https://sns-api.internetcomputer.org/openapi.json" -o sns-api.json # Ledger API curl -s "https://ledger-api.internetcomputer.org/openapi.json" -o ledger-api.json # Metrics API v1 curl -s "https://metrics-api.internetcomputer.org/api/v1/openapi.json" -o metrics-api-v1.json ``` ## Deploy & Test No canister deployment is required. These are external HTTP APIs. Test from the shell or your app: ```bash # Smoke test: IC API root curl -s -o /dev/null -w "%{http_code}" "https://ic-api.internetcomputer.org/api/v3/" # Expected: 200 # Smoke test: ICRC ledgers list curl -s -o /dev/null -w "%{http_code}" "https://icrc-api.internetcomputer.org/api/v2/ledgers?limit=1" # Expected: 200 ``` ## Verify It Works ```bash # 1. 
IC API returns canister list with data array curl -s "https://ic-api.internetcomputer.org/api/v3/canisters?limit=1" | head -c 200 # Expected: JSON with "data" or similar key and at least one canister # 2. ICRC API returns ledger list curl -s "https://icrc-api.internetcomputer.org/api/v2/ledgers?limit=1" | head -c 200 # Expected: JSON with "data" and ledger entries # 3. Ledger API returns supply (array of [timestamp, value]) curl -s "https://ledger-api.internetcomputer.org/supply/total/latest" # Expected: JSON array with two elements (timestamp and supply string) # 4. OpenAPI specs are valid JSON curl -s "https://ic-api.internetcomputer.org/api/v3/openapi.json" | python3 -c "import sys,json; json.load(sys.stdin); print('OK')" # Expected: OK ``` --- --- name: icrc-ledger title: ICRC Ledger Standard category: Tokens description: "Deploy and interact with ICRC-1/ICRC-2 token ledgers. Minting, approvals, transfers, and metadata." endpoints: 11 version: 2.3.4 status: stable dependencies: [] requires: [icp-cli >= 0.1.0, mops, ic-cdk >= 0.19] tags: [token, icrc1, icrc2, ledger, transfer, approve, mint, balance] --- # ICRC Ledger Standards ## What This Is ICRC-1 is the fungible token standard on Internet Computer, defining transfer, balance, and metadata interfaces. ICRC-2 extends it with approve/transferFrom (allowance) mechanics, enabling third-party spending like ERC-20 on Ethereum. 
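The approve/transferFrom flow can be modeled with a toy in-memory ledger. This is a sketch of the ICRC-2 semantics only (the class, method names, and error shapes below are illustrative, not the real canister interface):

```python
class ToyLedger:
    """Toy model of ICRC-1/2 allowance semantics (not the real canister API)."""

    def __init__(self, balances):
        self.balances = dict(balances)
        self.allowances = {}  # (owner, spender) -> approved amount

    def approve(self, owner, spender, amount):
        # Step 1: the token owner grants the spender an allowance.
        self.allowances[(owner, spender)] = amount

    def transfer_from(self, spender, owner, to, amount):
        # Step 2: the spender moves the owner's tokens, consuming allowance.
        allowed = self.allowances.get((owner, spender), 0)
        if allowed < amount:
            return {"Err": {"InsufficientAllowance": {"allowance": allowed}}}
        if self.balances.get(owner, 0) < amount:
            return {"Err": {"InsufficientFunds": {"balance": self.balances.get(owner, 0)}}}
        self.allowances[(owner, spender)] = allowed - amount
        self.balances[owner] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount
        return {"Ok": 1}

ledger = ToyLedger({"alice": 100})
# transfer_from without a prior approve is rejected:
assert "Err" in ledger.transfer_from("bob", "alice", "carol", 40)
ledger.approve("alice", "bob", 50)                              # owner approves
assert ledger.transfer_from("bob", "alice", "carol", 40) == {"Ok": 1}
assert ledger.balances["carol"] == 40
```

The real ledger additionally charges the transfer fee and supports `expected_allowance`/`expires_at`, but the two-step ordering (approve first, then transferFrom) is the same.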
## Prerequisites

- icp-cli >= 0.1.0 (install: `brew install dfinity/tap/icp-cli`)
- For Motoko: mops with `core = "2.0.0"` in mops.toml
- For Rust: `ic-cdk = "0.19"`, `candid = "0.10"`, `icrc-ledger-types = "0.1"` in Cargo.toml

## Canister IDs

| Token | Ledger Canister ID | Fee | Decimals |
|-------|-------------------|-----|----------|
| ICP | `ryjl3-tyaaa-aaaaa-aaaba-cai` | 10000 e8s (0.0001 ICP) | 8 |
| ckBTC | `mxzaz-hqaaa-aaaar-qaada-cai` | 10 satoshis | 8 |
| ckETH | `ss2fx-dyaaa-aaaar-qacoq-cai` | 2000000000000 wei (0.000002 ETH) | 18 |

Index canisters (for transaction history):

- ICP Index: `qhbym-qaaaa-aaaaa-aaafq-cai`
- ckBTC Index: `n5wcd-faaaa-aaaar-qaaea-cai`
- ckETH Index: `s3zol-vqaaa-aaaar-qacpa-cai`

## Mistakes That Break Your Build

1. **Wrong fee amount** -- The ICP fee is 10000 e8s, NOT 10000 ICP. The ckBTC fee is 10 satoshis, NOT 10 ckBTC. Using the wrong unit drains your entire balance in one transfer.
2. **Forgetting approve before transferFrom** -- ICRC-2 transferFrom will reject with `InsufficientAllowance` if the token owner has not called `icrc2_approve` first. This is a two-step flow: the owner approves, then the spender calls transferFrom.
3. **Not handling Err variants** -- `icrc1_transfer` returns `Result<Nat, TransferError>` (`#Ok`/`#Err` in Motoko), not a bare `Nat`. The error variants are: `BadFee`, `BadBurn`, `InsufficientFunds`, `TooOld`, `CreatedInFuture`, `Duplicate`, `TemporarilyUnavailable`, `GenericError`. You must match on every variant or at minimum propagate the error.
4. **Using wrong Account format** -- An ICRC-1 Account is `{ owner: Principal; subaccount: ?Blob }`, NOT just a Principal. The subaccount is a 32-byte blob. Passing null/None for subaccount uses the default subaccount (all zeros).
5. **Omitting created_at_time** -- Without `created_at_time`, you lose deduplication protection. Two identical transfers submitted within 24h will both execute. Set `created_at_time` to `Time.now()` (Motoko) or `ic_cdk::api::time()` (Rust) for dedup.
6.
**Hardcoding canister IDs as text** -- Always use `Principal.fromText("ryjl3-tyaaa-aaaaa-aaaba-cai")` (Motoko) or `Principal::from_text("ryjl3-tyaaa-aaaaa-aaaba-cai")` (Rust). Never pass raw strings where a Principal is expected. 7. **Calling ledger from frontend** -- ICRC-1 transfers should originate from a backend canister, not directly from the frontend. Frontend-initiated transfers expose the user to reentrancy and can bypass business logic. Use a backend canister as the intermediary. 8. **Shell substitution in `--argument-file` / `init_arg_file`** -- Expressions like `$(icp identity principal)` do NOT expand inside files referenced by `init_arg_file` or `--argument-file`. The file is read as literal text. Either use `--argument` on the command line (where the shell expands variables), or pre-generate the file with `envsubst` / `sed` before deploying. ## Implementation ### Motoko #### Imports and Types ```motoko import Principal "mo:core/Principal"; import Nat "mo:core/Nat"; import Nat8 "mo:core/Nat8"; import Nat64 "mo:core/Nat64"; import Blob "mo:core/Blob"; import Time "mo:core/Time"; import Int "mo:core/Int"; import Runtime "mo:core/Runtime"; ``` #### Define the ICRC-1 Actor Interface ```motoko persistent actor { type Account = { owner : Principal; subaccount : ?Blob; }; type TransferArg = { from_subaccount : ?Blob; to : Account; amount : Nat; fee : ?Nat; memo : ?Blob; created_at_time : ?Nat64; }; type TransferError = { #BadFee : { expected_fee : Nat }; #BadBurn : { min_burn_amount : Nat }; #InsufficientFunds : { balance : Nat }; #TooOld; #CreatedInFuture : { ledger_time : Nat64 }; #Duplicate : { duplicate_of : Nat }; #TemporarilyUnavailable; #GenericError : { error_code : Nat; message : Text }; }; type ApproveArg = { from_subaccount : ?Blob; spender : Account; amount : Nat; expected_allowance : ?Nat; expires_at : ?Nat64; fee : ?Nat; memo : ?Blob; created_at_time : ?Nat64; }; type ApproveError = { #BadFee : { expected_fee : Nat }; #InsufficientFunds : { 
balance : Nat }; #AllowanceChanged : { current_allowance : Nat }; #Expired : { ledger_time : Nat64 }; #TooOld; #CreatedInFuture : { ledger_time : Nat64 }; #Duplicate : { duplicate_of : Nat }; #TemporarilyUnavailable; #GenericError : { error_code : Nat; message : Text }; }; type TransferFromArg = { spender_subaccount : ?Blob; from : Account; to : Account; amount : Nat; fee : ?Nat; memo : ?Blob; created_at_time : ?Nat64; }; type TransferFromError = { #BadFee : { expected_fee : Nat }; #BadBurn : { min_burn_amount : Nat }; #InsufficientFunds : { balance : Nat }; #InsufficientAllowance : { allowance : Nat }; #TooOld; #CreatedInFuture : { ledger_time : Nat64 }; #Duplicate : { duplicate_of : Nat }; #TemporarilyUnavailable; #GenericError : { error_code : Nat; message : Text }; }; // Remote ledger actor reference (ICP ledger shown; swap canister ID for other tokens) transient let icpLedger = actor ("ryjl3-tyaaa-aaaaa-aaaba-cai") : actor { icrc1_balance_of : shared query (Account) -> async Nat; icrc1_transfer : shared (TransferArg) -> async { #Ok : Nat; #Err : TransferError }; icrc2_approve : shared (ApproveArg) -> async { #Ok : Nat; #Err : ApproveError }; icrc2_transfer_from : shared (TransferFromArg) -> async { #Ok : Nat; #Err : TransferFromError }; icrc1_fee : shared query () -> async Nat; icrc1_decimals : shared query () -> async Nat8; }; // Check balance public func getBalance(who : Principal) : async Nat { await icpLedger.icrc1_balance_of({ owner = who; subaccount = null; }) }; // Transfer tokens (this canister sends from its own account) // WARNING: Add access control in production — this allows any caller to transfer tokens public func sendTokens(to : Principal, amount : Nat) : async Nat { let now = Nat64.fromNat(Int.abs(Time.now())); let result = await icpLedger.icrc1_transfer({ from_subaccount = null; to = { owner = to; subaccount = null }; amount = amount; fee = ?10000; // ICP fee: 10000 e8s memo = null; created_at_time = ?now; }); switch (result) { case 
(#Ok(blockIndex)) { blockIndex }; case (#Err(#InsufficientFunds({ balance }))) { Runtime.trap("Insufficient funds. Balance: " # Nat.toText(balance)) }; case (#Err(#BadFee({ expected_fee }))) { Runtime.trap("Wrong fee. Expected: " # Nat.toText(expected_fee)) }; case (#Err(_)) { Runtime.trap("Transfer failed") }; } }; // ICRC-2: Approve a spender public shared ({ caller }) func approveSpender(spender : Principal, amount : Nat) : async Nat { // caller is captured at function entry in Motoko -- safe across await let now = Nat64.fromNat(Int.abs(Time.now())); let result = await icpLedger.icrc2_approve({ from_subaccount = null; spender = { owner = spender; subaccount = null }; amount = amount; expected_allowance = null; expires_at = null; fee = ?10000; memo = null; created_at_time = ?now; }); switch (result) { case (#Ok(blockIndex)) { blockIndex }; case (#Err(_)) { Runtime.trap("Approve failed") }; } }; // ICRC-2: Transfer from another account (requires prior approval) // WARNING: Add access control in production — this allows any caller to transfer tokens public func transferFrom(from : Principal, to : Principal, amount : Nat) : async Nat { let now = Nat64.fromNat(Int.abs(Time.now())); let result = await icpLedger.icrc2_transfer_from({ spender_subaccount = null; from = { owner = from; subaccount = null }; to = { owner = to; subaccount = null }; amount = amount; fee = ?10000; memo = null; created_at_time = ?now; }); switch (result) { case (#Ok(blockIndex)) { blockIndex }; case (#Err(#InsufficientAllowance({ allowance }))) { Runtime.trap("Insufficient allowance: " # Nat.toText(allowance)) }; case (#Err(_)) { Runtime.trap("TransferFrom failed") }; } }; } ``` ### Rust #### Cargo.toml Dependencies ```toml [package] name = "icrc_ledger_backend" version = "0.1.0" edition = "2021" [lib] crate-type = ["cdylib"] [dependencies] ic-cdk = "0.19" candid = "0.10" icrc-ledger-types = "0.1" serde = { version = "1", features = ["derive"] } ``` #### Complete Implementation ```rust use 
candid::{Nat, Principal}; use icrc_ledger_types::icrc1::account::Account; use icrc_ledger_types::icrc1::transfer::{TransferArg, TransferError}; use icrc_ledger_types::icrc2::approve::{ApproveArgs, ApproveError}; use icrc_ledger_types::icrc2::transfer_from::{TransferFromArgs, TransferFromError}; use ic_cdk::update; use ic_cdk::call::Call; const ICP_LEDGER: &str = "ryjl3-tyaaa-aaaaa-aaaba-cai"; const ICP_FEE: u64 = 10_000; // 10000 e8s fn ledger_id() -> Principal { Principal::from_text(ICP_LEDGER).unwrap() } // Check balance #[update] async fn get_balance(who: Principal) -> Nat { let account = Account { owner: who, subaccount: None, }; let (balance,): (Nat,) = Call::unbounded_wait(ledger_id(), "icrc1_balance_of") .with_arg(account) .await .expect("Failed to call icrc1_balance_of") .candid_tuple() .expect("Failed to decode response"); balance } // Transfer tokens from this canister's account // WARNING: Add access control in production — this allows any caller to transfer tokens #[update] async fn send_tokens(to: Principal, amount: Nat) -> Result<Nat, String> { let transfer_arg = TransferArg { from_subaccount: None, to: Account { owner: to, subaccount: None, }, amount, fee: Some(Nat::from(ICP_FEE)), memo: None, created_at_time: Some(ic_cdk::api::time()), }; let (result,): (Result<Nat, TransferError>,) = Call::unbounded_wait(ledger_id(), "icrc1_transfer") .with_arg(transfer_arg) .await .map_err(|e| format!("Call failed: {:?}", e))? .candid_tuple() .map_err(|e| format!("Decode failed: {:?}", e))?; match result { Ok(block_index) => Ok(block_index), Err(TransferError::InsufficientFunds { balance }) => { Err(format!("Insufficient funds. Balance: {}", balance)) } Err(TransferError::BadFee { expected_fee }) => { Err(format!("Wrong fee.
Expected: {}", expected_fee)) } Err(e) => Err(format!("Transfer error: {:?}", e)), } } // ICRC-2: Approve a spender #[update] async fn approve_spender(spender: Principal, amount: Nat) -> Result<Nat, String> { let args = ApproveArgs { from_subaccount: None, spender: Account { owner: spender, subaccount: None, }, amount, expected_allowance: None, expires_at: None, fee: Some(Nat::from(ICP_FEE)), memo: None, created_at_time: Some(ic_cdk::api::time()), }; let (result,): (Result<Nat, ApproveError>,) = Call::unbounded_wait(ledger_id(), "icrc2_approve") .with_arg(args) .await .map_err(|e| format!("Call failed: {:?}", e))? .candid_tuple() .map_err(|e| format!("Decode failed: {:?}", e))?; result.map_err(|e| format!("Approve error: {:?}", e)) } // ICRC-2: Transfer from another account (requires prior approval) // WARNING: Add access control in production — this allows any caller to transfer tokens #[update] async fn transfer_from(from: Principal, to: Principal, amount: Nat) -> Result<Nat, String> { let args = TransferFromArgs { spender_subaccount: None, from: Account { owner: from, subaccount: None, }, to: Account { owner: to, subaccount: None, }, amount, fee: Some(Nat::from(ICP_FEE)), memo: None, created_at_time: Some(ic_cdk::api::time()), }; let (result,): (Result<Nat, TransferFromError>,) = Call::unbounded_wait(ledger_id(), "icrc2_transfer_from") .with_arg(args) .await .map_err(|e| format!("Call failed: {:?}", e))? .candid_tuple() .map_err(|e| format!("Decode failed: {:?}", e))?; result.map_err(|e| format!("TransferFrom error: {:?}", e)) } ``` ## Deploy & Test ### Deploy a Local ICRC-1 Ledger for Testing Add to `icp.yaml`: Pin the release version before deploying: get the latest release tag from https://github.com/dfinity/ic/releases?q=%22ledger-suite-icrc%22&expanded=false, then substitute it for `` in both URLs below.
```yaml canisters: icrc1_ledger: name: icrc1_ledger recipe: type: custom candid: "https://github.com/dfinity/ic/releases/download//ledger.did" wasm: "https://github.com/dfinity/ic/releases/download//ic-icrc1-ledger.wasm.gz" config: init_arg_file: "icrc1_ledger_init.args" ``` Create `icrc1_ledger_init.args` (replace `YOUR_PRINCIPAL` with the output of `icp identity principal`): > **Pitfall:** Shell substitutions like `$(icp identity principal)` will NOT expand inside this file. You must paste the literal principal string. ``` (variant { Init = record { token_symbol = "TEST"; token_name = "Test Token"; minting_account = record { owner = principal "YOUR_PRINCIPAL" }; transfer_fee = 10_000 : nat; metadata = vec {}; initial_balances = vec { record { record { owner = principal "YOUR_PRINCIPAL" }; 100_000_000_000 : nat; }; }; archive_options = record { num_blocks_to_archive = 1000 : nat64; trigger_threshold = 2000 : nat64; controller_id = principal "YOUR_PRINCIPAL"; }; feature_flags = opt record { icrc2 = true }; }}) ``` Deploy: ```bash # Start local replica icp network start -d # Deploy the ledger icp deploy icrc1_ledger # Verify it deployed icp canister id icrc1_ledger ``` ### Interact with Mainnet Ledgers ```bash # Check ICP balance icp canister call ryjl3-tyaaa-aaaaa-aaaba-cai icrc1_balance_of \ "(record { owner = principal \"$(icp identity principal)\"; subaccount = null })" \ -e ic # Check token metadata icp canister call ryjl3-tyaaa-aaaaa-aaaba-cai icrc1_metadata '()' -e ic # Check fee icp canister call ryjl3-tyaaa-aaaaa-aaaba-cai icrc1_fee '()' -e ic # Transfer ICP (amount in e8s: 100000000 = 1 ICP) icp canister call ryjl3-tyaaa-aaaaa-aaaba-cai icrc1_transfer \ "(record { to = record { owner = principal \"TARGET_PRINCIPAL_HERE\"; subaccount = null }; amount = 100_000_000 : nat; fee = opt (10_000 : nat); memo = null; from_subaccount = null; created_at_time = null; })" -e ic ``` ## Verify It Works ### Local Ledger Verification ```bash # 1. 
Check your balance (should show initial minted amount) icp canister call icrc1_ledger icrc1_balance_of \ "(record { owner = principal \"$(icp identity principal)\"; subaccount = null })" # Expected: (100_000_000_000 : nat) # 2. Check fee icp canister call icrc1_ledger icrc1_fee '()' # Expected: (10_000 : nat) # 3. Check decimals icp canister call icrc1_ledger icrc1_decimals '()' # Expected: (8 : nat8) # 4. Check symbol icp canister call icrc1_ledger icrc1_symbol '()' # Expected: ("TEST") # 5. Transfer to another identity icp identity new test-recipient --storage plaintext 2>/dev/null RECIPIENT=$(icp identity principal --identity test-recipient) icp canister call icrc1_ledger icrc1_transfer \ "(record { to = record { owner = principal \"$RECIPIENT\"; subaccount = null }; amount = 1_000_000 : nat; fee = opt (10_000 : nat); memo = null; from_subaccount = null; created_at_time = null; })" # Expected: (variant { Ok = 0 : nat }) # 6. Verify recipient balance icp canister call icrc1_ledger icrc1_balance_of \ "(record { owner = principal \"$RECIPIENT\"; subaccount = null })" # Expected: (1_000_000 : nat) ``` ### Mainnet Verification ```bash # Verify ICP ledger is reachable icp canister call ryjl3-tyaaa-aaaaa-aaaba-cai icrc1_symbol '()' -e ic # Expected: ("ICP") # Verify ckBTC ledger is reachable icp canister call mxzaz-hqaaa-aaaar-qaada-cai icrc1_symbol '()' -e ic # Expected: ("ckBTC") # Verify ckETH ledger is reachable icp canister call ss2fx-dyaaa-aaaar-qacoq-cai icrc1_symbol '()' -e ic # Expected: ("ckETH") ``` --- --- name: internet-identity title: Internet Identity Auth category: Auth description: "Integrate Internet Identity authentication into frontend and backend canisters. Delegation, session management, and anchor handling." 
endpoints: 6 version: 5.0.3 status: stable dependencies: [asset-canister] requires: [icp-cli >= 0.1.0, @icp-sdk/auth >= 5.0, @icp-sdk/core >= 5.0] tags: [auth, login, passkey, webauthn, identity, session, delegation, principal] --- # Internet Identity Authentication ## What This Is Internet Identity (II) is the Internet Computer's native authentication system. Users authenticate with passkeys, WebAuthn, or hardware security keys -- no passwords, no seed phrases, no third-party identity providers. Each user gets a unique principal per dApp, preventing cross-app tracking. ## Prerequisites - icp-cli >= 0.1.0 (`brew install dfinity/tap/icp-cli`) - Node.js >= 18 (for frontend) - `@icp-sdk/auth` npm package (>= 5.0.0) - `@icp-sdk/core` npm package (>= 5.0.0) ## Canister IDs | Environment | Canister ID | URL | |-------------|-------------|-----| | Mainnet | `rdmx6-jaaaa-aaaaa-aaadq-cai` | `https://identity.ic0.app` (also `https://identity.internetcomputer.org`) | | Local | Assigned on deploy | `http://.localhost:4943` | ## Mistakes That Break Your Build 1. **Not rejecting anonymous principal.** The anonymous principal `2vxsx-fae` is sent when a user is not authenticated. If your backend does not explicitly reject it, unauthenticated users can call protected endpoints. ALWAYS check `Principal.isAnonymous(caller)` and reject. 2. **Using the wrong II URL for the environment.** Local development must point to `http://.localhost:4943` (this canister ID is different from mainnet). Mainnet must use `https://identity.ic0.app`. Hardcoding one breaks the other. The local II canister ID is assigned dynamically when you run `icp deploy internet_identity` -- read it from `process.env.CANISTER_ID_INTERNET_IDENTITY` (note: this auto-generated env var may work differently in icp-cli than it did in the legacy tooling; verify your build tooling picks it up) or your canister_ids.json (path may differ in icp-cli projects compared to the legacy `.icp/local/canister_ids.json` location). 3. 
**Setting delegation expiry too long.** Maximum delegation expiry is 30 days (2_592_000_000_000_000 nanoseconds). Longer values are silently clamped, which causes confusing session behavior. Use 8 hours for normal apps, 30 days maximum for "remember me" flows. 4. **Not handling auth callbacks.** The `authClient.login()` call requires `onSuccess` and `onError` callbacks. Without them, login failures are silently swallowed. 5. **Defensive practice: bind `msg_caller()` before `.await` in Rust.** The current ic-cdk executor preserves the caller across `.await` points, but capturing it early guards against future executor changes. Always bind `let caller = ic_cdk::api::msg_caller();` at the top of async update functions. 6. **Passing principal as string to backend.** The `AuthClient` gives you an `Identity` object. Backend canister methods receive the caller principal automatically via the IC protocol -- you do not pass it as a function argument. Use `shared(msg) { msg.caller }` in Motoko or `ic_cdk::api::msg_caller()` in Rust. 7. **Not calling `agent.fetchRootKey()` in local development.** Without this, certificate verification fails on localhost. Never call it in production -- it's a security risk on mainnet. 8. **Storing auth state in `thread_local!` without stable storage (Rust)** -- `thread_local!` state held in a plain `RefCell` is heap memory, wiped on every canister upgrade. Use `StableCell` from `ic-stable-structures` for any state that must persist across upgrades, especially ownership/auth data. ## Implementation ### icp.yaml Configuration For local development, download the II canister WASM from the [dfinity/internet-identity releases](https://github.com/dfinity/internet-identity/releases). Place the `.wasm.gz` and `.did` files in your project.
```yaml canisters: internet_identity: type: custom candid: deps/internet-identity/internet_identity.did wasm: deps/internet-identity/internet_identity_dev.wasm.gz build: "" remote: id: ic: rdmx6-jaaaa-aaaaa-aaadq-cai ``` The `remote.id.ic` field tells `icp` to skip deploying this canister on mainnet (use the existing one). Locally, `icp` deploys the provided WASM. ### Frontend: Vanilla JavaScript/TypeScript Login Flow This is framework-agnostic. Adapt the DOM manipulation to your framework. ```javascript import { AuthClient } from "@icp-sdk/auth/client"; import { HttpAgent, Actor } from "@icp-sdk/core/agent"; // 1. Create the auth client const authClient = await AuthClient.create(); // 2. Determine II URL based on environment // The local II canister gets a different canister ID each time you deploy it. // Pass it via an environment variable at build time (e.g., Vite: import.meta.env.VITE_II_CANISTER_ID). function getIdentityProviderUrl() { const host = window.location.hostname; const isLocal = host === "localhost" || host === "127.0.0.1" || host.endsWith(".localhost"); if (isLocal) { // Read from env variable set during build, or from canister_ids.json // For Vite: define VITE_II_CANISTER_ID in .env.local // For webpack: use DefinePlugin with process.env.II_CANISTER_ID const iiCanisterId = import.meta.env.VITE_II_CANISTER_ID ?? process.env.CANISTER_ID_INTERNET_IDENTITY // auto-generated by build tooling (verify this works with icp-cli) ?? "be2us-64aaa-aaaaa-qaabq-cai"; // fallback -- replace with your actual local II canister ID return `http://${iiCanisterId}.localhost:4943`; } return "https://identity.ic0.app"; } // 3. 
Login async function login() { return new Promise((resolve, reject) => { authClient.login({ identityProvider: getIdentityProviderUrl(), maxTimeToLive: BigInt(8) * BigInt(3_600_000_000_000), // 8 hours in nanoseconds onSuccess: () => { const identity = authClient.getIdentity(); const principal = identity.getPrincipal().toText(); console.log("Logged in as:", principal); resolve(identity); }, onError: (error) => { console.error("Login failed:", error); reject(error); }, }); }); } // 4. Create an authenticated agent and actor async function createAuthenticatedActor(identity, canisterId, idlFactory) { const isLocal = window.location.hostname === "localhost" || window.location.hostname === "127.0.0.1" || window.location.hostname.endsWith(".localhost"); const agent = await HttpAgent.create({ identity, host: isLocal ? "http://localhost:4943" : "https://icp-api.io", ...(isLocal && { shouldFetchRootKey: true, verifyQuerySignatures: false }), }); return Actor.createActor(idlFactory, { agent, canisterId }); } // 5. Logout async function logout() { await authClient.logout(); // Optionally reload or reset UI state } // 6. Check if already authenticated (on page load) const isAuthenticated = await authClient.isAuthenticated(); if (isAuthenticated) { const identity = authClient.getIdentity(); // Restore session -- create actor, update UI } ``` ### Backend: Motoko ```motoko import Principal "mo:core/Principal"; import Runtime "mo:core/Runtime"; persistent actor { // Owner/admin principal var owner : ?Principal = null; // Helper: reject anonymous callers func requireAuth(caller : Principal) : () { if (Principal.isAnonymous(caller)) { Runtime.trap("Anonymous principal not allowed. 
Please authenticate."); }; }; // Initialize the first authenticated caller as owner public shared (msg) func initOwner() : async Text { requireAuth(msg.caller); switch (owner) { case (null) { owner := ?msg.caller; "Owner set to " # Principal.toText(msg.caller); }; case (?_existing) { "Owner already initialized"; }; }; }; // Owner-only endpoint example public shared (msg) func adminAction() : async Text { requireAuth(msg.caller); switch (owner) { case (?o) { if (o != msg.caller) { Runtime.trap("Only the owner can call this function."); }; "Admin action performed"; }; case (null) { Runtime.trap("Owner not set. Call initOwner first."); }; }; }; // Public query: anyone can call, but returns different data for authenticated users public shared query (msg) func whoAmI() : async Text { if (Principal.isAnonymous(msg.caller)) { "You are not authenticated (anonymous)"; } else { "Your principal: " # Principal.toText(msg.caller); }; }; // Getting caller principal in shared functions // ALWAYS use `shared (msg)` or `shared ({ caller })` syntax: public shared ({ caller }) func protectedEndpoint(data : Text) : async Bool { requireAuth(caller); // Use `caller` for authorization checks true; }; }; ``` ### Backend: Rust ```toml # Cargo.toml [package] name = "ii_backend" version = "0.1.0" edition = "2021" [lib] crate-type = ["cdylib"] [dependencies] ic-cdk = "0.19" candid = "0.10" serde = { version = "1", features = ["derive"] } ic-stable-structures = "0.7" ``` ```rust use candid::Principal; use ic_cdk::{query, update}; use ic_stable_structures::{DefaultMemoryImpl, StableCell}; use std::cell::RefCell; thread_local! { // Principal::anonymous() is used as the "not set" sentinel. // Option<Principal> does not implement Storable, so we store Principal directly. static OWNER: RefCell<StableCell<Principal, DefaultMemoryImpl>> = RefCell::new( StableCell::init(DefaultMemoryImpl::default(), Principal::anonymous()) ); } /// Reject anonymous principal. Call this at the top of every protected endpoint.
fn require_auth() -> Principal { let caller = ic_cdk::api::msg_caller(); if caller == Principal::anonymous() { ic_cdk::trap("Anonymous principal not allowed. Please authenticate."); } caller } #[update] fn init_owner() -> String { // Defensive: capture caller before any .await calls. let caller = require_auth(); OWNER.with(|owner| { let mut cell = owner.borrow_mut(); let current = *cell.get(); if current == Principal::anonymous() { cell.set(caller); format!("Owner set to {}", caller) } else { "Owner already initialized".to_string() } }) } #[update] fn admin_action() -> String { let caller = require_auth(); OWNER.with(|owner| { let cell = owner.borrow(); let current = *cell.get(); if current == Principal::anonymous() { ic_cdk::trap("Owner not set. Call init_owner first."); } else if current == caller { "Admin action performed".to_string() } else { ic_cdk::trap("Only the owner can call this function."); } }) } #[query] fn who_am_i() -> String { let caller = ic_cdk::api::msg_caller(); if caller == Principal::anonymous() { "You are not authenticated (anonymous)".to_string() } else { format!("Your principal: {}", caller) } } // For async functions, capture caller before await as defensive practice: #[update] async fn protected_async_action() -> String { let caller = require_auth(); // Capture before any await let _result = some_async_operation().await; format!("Action completed by {}", caller) } ``` **Rust defensive practice:** Bind `let caller = ic_cdk::api::msg_caller();` at the top of async update functions. The current ic-cdk executor preserves caller across `.await` points via protected tasks, but capturing it early guards against future executor changes. 
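The nanosecond values used for `maxTimeToLive` are easy to get wrong by a factor of 1000. The arithmetic behind the 8-hour session and the 30-day maximum mentioned above can be sanity-checked in plain Rust. This is an illustration only: the clamp itself is enforced by Internet Identity, not by your code, and `session_ttl_ns` / `clamped_ttl` are hypothetical helper names:

```rust
// Delegation TTLs are expressed in nanoseconds (maxTimeToLive is a BigInt/u64 of ns).
const NS_PER_SECOND: u64 = 1_000_000_000;
const NS_PER_HOUR: u64 = 3_600 * NS_PER_SECOND; // 3_600_000_000_000
const NS_PER_DAY: u64 = 24 * NS_PER_HOUR;

// II silently clamps delegation expiry to 30 days.
const MAX_DELEGATION_NS: u64 = 30 * NS_PER_DAY; // 2_592_000_000_000_000

// Hypothetical helper: convert a session length in hours to nanoseconds.
fn session_ttl_ns(hours: u64) -> u64 {
    hours * NS_PER_HOUR
}

// Hypothetical helper: model the server-side clamp.
fn clamped_ttl(requested_ns: u64) -> u64 {
    requested_ns.min(MAX_DELEGATION_NS)
}

fn main() {
    // The 8-hour TTL used in the login example above:
    assert_eq!(session_ttl_ns(8), 28_800_000_000_000);
    // The 30-day maximum from mistake 3:
    assert_eq!(MAX_DELEGATION_NS, 2_592_000_000_000_000);
    // Requesting 90 days is silently clamped to 30:
    assert_eq!(clamped_ttl(90 * NS_PER_DAY), MAX_DELEGATION_NS);
    println!("delegation TTL arithmetic checks out");
}
```

The silent clamp is why overly long `maxTimeToLive` values cause confusing session behavior: the session simply expires earlier than the code requested.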
## Deploy & Test ### Local Deployment ```bash # Start the local replica icp network start -d # Deploy II canister and your backend icp deploy internet_identity icp deploy backend # Verify II is running icp canister status internet_identity ``` ### Mainnet Deployment ```bash # II is already on mainnet -- only deploy your canisters icp deploy -e ic backend ``` ## Verify It Works ```bash # 1. Check II canister is running icp canister status internet_identity # Expected: Status: Running # 2. Test anonymous rejection from CLI icp canister call backend adminAction --identity anonymous # Expected: Error containing "Anonymous principal not allowed" # 3. Test whoAmI as anonymous icp canister call backend whoAmI --identity anonymous # Expected: ("You are not authenticated (anonymous)") # 4. Test whoAmI as authenticated identity icp canister call backend whoAmI # Expected: ("Your principal: ") # Note: without --identity, icp CLI calls use the current identity, not anonymous # 5. Alternatively, switch the active identity icp identity use anonymous icp canister call backend adminAction # Expected: Error containing "Anonymous principal not allowed" icp identity use default # Switch back # 6. Open II in browser for local dev # Visit: http://.localhost:4943 # You should see the Internet Identity login page ``` --- --- name: multi-canister title: Multi-Canister Architecture category: Architecture description: "Design and deploy multi-canister dapps with inter-canister calls, shared state patterns, and upgrade strategies." endpoints: 8 version: 3.1.1 status: stable dependencies: [stable-memory] requires: [icp-cli >= 0.1.0, mops, ic-cdk >= 0.19] tags: [inter-canister, call, architecture, scaling, shared-state, upgrade, multi, sharding] --- # Multi-Canister Architecture ## What This Is Splitting an IC application across multiple canisters for scaling, separation of concerns, or independent upgrade cycles. Each canister has its own state, cycle balance, and upgrade path.
Canisters communicate via async inter-canister calls. ## Prerequisites - `icp-cli` >= 0.1.0 (`brew install dfinity/tap/icp-cli`) - For Motoko: `mops` package manager, `core = "2.0.0"` in mops.toml - For Rust: `ic-cdk >= 0.19`, `candid`, `serde`, `ic-stable-structures` - Understanding of `async`/`await` and error handling ## How It Works A caller canister makes a call to a callee canister: the method name, arguments (payload), and attached cycles are packed into a canister request message, which is delivered to the callee while the caller blocks on `await`. The callee executes the request and produces a response, which is packed into a canister response message and delivered to the caller. The caller then wakes from the `await` and continues execution (executing the canister response message). The system may produce a reject response message if, for example, the callee is not found or a resource limit was reached. Calls may be unbounded wait (the caller MUST wait until the callee produces a response) or bounded wait (the caller MAY get a `SYS_UNKNOWN` response instead of the actual response after the call timeout expires or if the subnet runs low on resources). Request delivery is best-effort: the system may decide to reject any request instead of delivering it. Unbounded wait response (including reject response) delivery is guaranteed: the caller will always learn the outcome of the call. Bounded wait response delivery is best-effort: the caller may receive a system-generated `SYS_UNKNOWN` reject response (unknown outcome) instead of the actual response if the call timed out or some system resource was exhausted, whether or not the request was delivered to the callee. ## When to Use Multi-Canister | Reason | Threshold | |---|---| | Storage limits | Each canister: up to hundreds of GB stable memory + 4GB heap. If your data could exceed heap limits or benefit from partitioning, split storage across canisters. | | Scalable compute | Canisters are single-threaded actors.
Sharding load across multiple canisters, potentially across multiple subnets, can significantly improve throughput. | | Separation of concerns | Auth service, content service, payment service as independent units. | | Independent upgrades | Upgrade the payments canister without touching the user canister. | | Access control | Different controllers for different canisters (e.g., DAO controls one, team controls another). | **When NOT to use:** Simple apps with <1GB data. Single-canister is simpler, faster, and avoids inter-canister call overhead. Do not over-architect. ## Mistakes That Break Your Build 1. **Request and response payloads are limited to 2 MB.** Any canister call may need to cross subnet boundaries, and cross-subnet (XNet) messages -- the request and response corresponding to each canister call -- are inducted in (packaged into) 4 MB blocks, so canister request and response payloads are capped at 2 MB. A call with a request payload above 2 MB fails synchronously, and a response with a payload above 2 MB traps. Chunk larger payloads into 1 MB chunks (to allow for encoding overhead) and deliver them over multiple calls (e.g. chunked uploads or byte-range queries). 2. **Update methods that make calls are NOT executed atomically.** When an update method makes a call, the code before the `await` is one atomic message execution (i.e. the ingress message or canister request that invoked the update method); the code after the `await` is a separate atomic message execution (the response to the call). In particular, if the update method traps after the `await`, any mutations before the `await` have already been persisted, while any mutations after the `await` will be rolled back. Design for eventual consistency or use a saga pattern. 3.
**Unbounded wait calls can block canister upgrades indefinitely.** Unbounded wait calls may take arbitrarily long to complete: a malicious or incorrect callee may spin indefinitely without producing a response. Canisters cannot be stopped while awaiting responses to outstanding calls. Bounded wait calls avoid this issue by ensuring that calls complete in a bounded time, whether or not the callee responded. 4. **Use idempotent APIs, or provide a separate endpoint to query the outcome of a non-idempotent call.** If a call to a non-idempotent API times out, there must be another way for the caller to learn its outcome (e.g. by attaching a unique ID to the original call and later querying for the outcome using that ID). Without such a mechanism, a caller that receives a `SYS_UNKNOWN` response may be unable to decide whether to continue, retry the call, or abort. 5. **Calls across subnet boundaries are slower than calls on the same subnet.** Under light subnet load, a call to a canister on the same subnet may complete and its response may be processed by the caller within a single round. The call latency only depends on how frequently the caller and callee are scheduled (which may be multiple times per round). A cross-subnet call requires 2-3 rounds in each direction (request delivery and response delivery), plus scheduler latency. 6. **Calls across subnet boundaries have relatively low bandwidth.** Cross-subnet (or XNet) messages are inducted in (packaged into) 4 MB blocks once per round, along with any ingress messages and other XNet messages. Expect multiple MBs of messages to take multiple rounds to deliver, on top of the XNet latency. (Subnet-local messages are routed within the subnet, so they don't suffer from this bandwidth limitation). 7.
**Defensive practice: bind `msg_caller()` before `.await` in Rust.** The current ic-cdk executor preserves caller across `.await` points via protected tasks, but capturing it early guards against future executor changes. **Motoko is safe:** `public shared ({ caller }) func` captures `caller` as an immutable binding at function entry. ```rust // Recommended (Rust) — capture caller before await: #[update] async fn do_thing() { let original_caller = ic_cdk::api::msg_caller(); // Defensive: capture before await let _ = some_canister_call().await; let who = original_caller; // Safe } ``` 8. **Not handling rejected calls.** Inter-canister calls can fail (callee trapped, out of cycles, canister stopped). In Motoko use `try/catch`. In Rust, handle the `Result` from `ic_cdk::call`. Unhandled rejections trap your canister. 9. **Deploying canisters in the wrong order.** A canister must be deployed after the canisters it depends on. Declare `dependencies` in icp.yaml so `icp deploy` orders them correctly. 10. **Forgetting to generate type declarations for each backend canister.** Use language-specific tooling (e.g., `didc` for Candid bindings) to generate declarations for each backend canister individually. 11. **Shared types diverging between canisters.** If canister A expects `{ id: Nat; name: Text }` and canister B sends `{ id: Nat; title: Text }`, the call silently fails or traps. Use a shared types module imported by both canisters. 12. **Canister factory without enough cycles.** Creating a canister requires cycles. The management canister charges for creation and the initial cycle balance. If you do not attach enough cycles, creation fails. 13. **`canister_inspect_message` is not called for inter-canister calls.** It only runs for ingress messages (from external users). Do not rely on it for access control between canisters. Use explicit principal checks instead. 14.
**Not setting up `#[init]` and `#[post_upgrade]` in Rust.** Without a `post_upgrade` handler, canister upgrades may behave unexpectedly. Always define both.

## Implementation

### Project Structure

```
my-project/
  icp.yaml
  mops.toml
  src/
    shared/
      Types.mo          # Shared type definitions
    user_service/
      main.mo           # User canister
    content_service/
      main.mo           # Content canister
    frontend/
      ...               # Frontend assets
```

### icp.yaml

```yaml
defaults:
  build:
    packtool: mops sources
canisters:
  user_service:
    type: motoko
    main: src/user_service/main.mo
  content_service:
    type: motoko
    main: src/content_service/main.mo
    dependencies:
      - user_service
  frontend:
    type: assets
    source:
      - dist
    dependencies:
      - user_service
      - content_service
networks:
  local:
    bind: 127.0.0.1:4943
```

### Motoko

#### src/shared/Types.mo — Shared Types

```motoko
module {
  public type UserId = Principal;
  public type PostId = Nat;

  public type UserProfile = {
    id : UserId;
    username : Text;
    created : Int;
  };

  public type Post = {
    id : PostId;
    author : UserId;
    title : Text;
    body : Text;
    created : Int;
  };

  public type ServiceError = {
    #NotFound;
    #Unauthorized;
    #AlreadyExists;
    #InternalError : Text;
  };
};
```

#### src/user_service/main.mo — User Canister

```motoko
import Map "mo:core/Map";
import Principal "mo:core/Principal";
import Array "mo:core/Array";
import Time "mo:core/Time";
import Result "mo:core/Result";
import Runtime "mo:core/Runtime";
import Types "../shared/Types";

persistent actor {
  type UserProfile = Types.UserProfile;

  let users = Map.empty<Principal, UserProfile>();

  // Register a new user
  public shared ({ caller }) func register(username : Text) : async Result.Result<UserProfile, Types.ServiceError> {
    if (Principal.isAnonymous(caller)) {
      return #err(#Unauthorized);
    };
    switch (Map.get(users, Principal.compare, caller)) {
      case (?_existing) { #err(#AlreadyExists) };
      case null {
        let profile : UserProfile = {
          id = caller;
          username;
          created = Time.now();
        };
        Map.add(users, Principal.compare, caller, profile);
        #ok(profile)
      };
    }
  };

  // Check if a user exists (called by other canisters)
  public shared query func isValidUser(userId : Principal) : async Bool {
    switch (Map.get(users, Principal.compare, userId)) {
      case (?_) { true };
      case null { false };
    }
  };

  // Get user profile
  public shared query func getUser(userId : Principal) : async ?UserProfile {
    Map.get(users, Principal.compare, userId)
  };

  // Get all users
  public query func getUsers() : async [UserProfile] {
    Array.fromIter(Map.values(users))
  };
};
```

#### src/content_service/main.mo — Content Canister (calls User Service)

```motoko
import Map "mo:core/Map";
import Nat "mo:core/Nat";
import Array "mo:core/Array";
import Time "mo:core/Time";
import Result "mo:core/Result";
import Runtime "mo:core/Runtime";
import Error "mo:core/Error";
import Principal "mo:core/Principal";
import Types "../shared/Types";

// Import the other canister — name must match the icp.yaml canister key
import UserService "canister:user_service";

persistent actor {
  type Post = Types.Post;

  let posts = Map.empty<Nat, Post>();
  var postCounter : Nat = 0;

  // Create a post — validates the user via an inter-canister call
  public shared ({ caller }) func createPost(title : Text, body : Text) : async Result.Result<Post, Types.ServiceError> {
    // CRITICAL: capture caller BEFORE any await
    let originalCaller = caller;

    if (Principal.isAnonymous(originalCaller)) {
      return #err(#Unauthorized);
    };

    // Inter-canister call to user_service
    let isValid = try {
      await UserService.isValidUser(originalCaller)
    } catch (e : Error.Error) {
      Runtime.trap("User service unavailable: " # Error.message(e));
    };

    if (not isValid) {
      return #err(#Unauthorized);
    };

    let id = postCounter;
    let post : Post = {
      id;
      author = originalCaller; // Use the captured caller, NOT caller
      title;
      body;
      created = Time.now();
    };
    Map.add(posts, Nat.compare, id, post);
    postCounter += 1;
    #ok(post)
  };

  // Get all posts
  public query func getPosts() : async [Post] {
    Array.fromIter(Map.values(posts))
  };

  // Get posts by author — with enriched user data
  public func getPostsWithAuthor(authorId : Principal) : async {
    user : ?Types.UserProfile;
    posts : [Post];
  } {
    let userProfile = try {
      await UserService.getUser(authorId)
    } catch (_e : Error.Error) {
      null
    };
    let authorPosts = Array.filter(
      Array.fromIter(Map.values(posts)),
      func(p : Post) : Bool { p.author == authorId }
    );
    { user = userProfile; posts = authorPosts }
  };

  // Delete a post — only the author can delete
  public shared ({ caller }) func deletePost(id : Nat) : async Result.Result<(), Types.ServiceError> {
    let originalCaller = caller;
    switch (Map.get(posts, Nat.compare, id)) {
      case (?post) {
        if (post.author != originalCaller) {
          return #err(#Unauthorized);
        };
        ignore Map.delete(posts, Nat.compare, id);
        #ok(())
      };
      case null { #err(#NotFound) };
    }
  };
};
```

### Rust

#### Project Structure (Rust)

```
my-project/
  icp.yaml
  Cargo.toml            # workspace
  src/
    user_service/
      Cargo.toml
      src/lib.rs
    content_service/
      Cargo.toml
      src/lib.rs
```

#### Cargo.toml (workspace root)

```toml
[workspace]
members = [
    "src/user_service",
    "src/content_service",
]
```

#### icp.yaml (Rust)

```yaml
canisters:
  user_service:
    type: rust
    package: user_service
    candid: src/user_service/user_service.did
  content_service:
    type: rust
    package: content_service
    candid: src/content_service/content_service.did
    dependencies:
      - user_service
  frontend:
    type: assets
    source:
      - dist
    dependencies:
      - user_service
      - content_service
networks:
  local:
    bind: 127.0.0.1:4943
```

#### src/user_service/Cargo.toml

```toml
[package]
name = "user_service"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["cdylib"]

[dependencies]
ic-cdk = "0.19"
candid = "0.10"
serde = { version = "1", features = ["derive"] }
ic-stable-structures = "0.7"
```

#### src/user_service/src/lib.rs

```rust
use candid::{CandidType, Deserialize, Principal};
use ic_cdk::{init, post_upgrade, query, update};
use ic_stable_structures::memory_manager::{MemoryId, MemoryManager, VirtualMemory};
use ic_stable_structures::{DefaultMemoryImpl, StableBTreeMap};
use std::cell::RefCell;

type Memory = VirtualMemory<DefaultMemoryImpl>;

#[derive(CandidType, Deserialize, Clone,
Debug)]
struct UserProfile {
    id: Principal,
    username: String,
    created: i64,
}

// Stable storage
thread_local! {
    static MEMORY_MANAGER: RefCell<MemoryManager<DefaultMemoryImpl>> =
        RefCell::new(MemoryManager::init(DefaultMemoryImpl::default()));

    static USERS: RefCell<StableBTreeMap<Vec<u8>, Vec<u8>, Memory>> = RefCell::new(
        StableBTreeMap::init(
            MEMORY_MANAGER.with(|m| m.borrow().get(MemoryId::new(0)))
        )
    );
}

fn principal_to_key(p: &Principal) -> Vec<u8> {
    p.as_slice().to_vec()
}

fn serialize_profile(profile: &UserProfile) -> Vec<u8> {
    candid::encode_one(profile).unwrap()
}

fn deserialize_profile(bytes: &[u8]) -> UserProfile {
    candid::decode_one(bytes).unwrap()
}

#[init]
fn init() {}

#[post_upgrade]
fn post_upgrade() {}

#[update]
fn register(username: String) -> Result<UserProfile, String> {
    let caller = ic_cdk::api::msg_caller();
    if caller == Principal::anonymous() {
        return Err("Unauthorized".to_string());
    }
    let key = principal_to_key(&caller);
    USERS.with(|users| {
        if users.borrow().contains_key(&key) {
            return Err("Already exists".to_string());
        }
        let profile = UserProfile {
            id: caller,
            username,
            created: ic_cdk::api::time() as i64,
        };
        let bytes = serialize_profile(&profile);
        users.borrow_mut().insert(key, bytes);
        Ok(profile)
    })
}

#[query]
fn is_valid_user(user_id: Principal) -> bool {
    let key = principal_to_key(&user_id);
    USERS.with(|users| users.borrow().contains_key(&key))
}

#[query]
fn get_user(user_id: Principal) -> Option<UserProfile> {
    let key = principal_to_key(&user_id);
    USERS.with(|users| {
        users.borrow().get(&key).map(|bytes| deserialize_profile(&bytes))
    })
}

ic_cdk::export_candid!();
```

#### src/content_service/Cargo.toml

```toml
[package]
name = "content_service"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["cdylib"]

[dependencies]
ic-cdk = "0.19"
candid = "0.10"
serde = { version = "1", features = ["derive"] }
ic-stable-structures = "0.7"
```

#### src/content_service/src/lib.rs

```rust
use candid::{CandidType, Deserialize, Principal};
use ic_cdk::call::Call;
use ic_cdk::{init, post_upgrade, query, update};
use
ic_stable_structures::memory_manager::{MemoryId, MemoryManager, VirtualMemory};
use ic_stable_structures::{DefaultMemoryImpl, StableBTreeMap, StableCell};
use std::cell::RefCell;

type Memory = VirtualMemory<DefaultMemoryImpl>;

#[derive(CandidType, Deserialize, Clone, Debug)]
struct Post {
    id: u64,
    author: Principal,
    title: String,
    body: String,
    created: i64,
}

#[derive(CandidType, Deserialize, Clone, Debug)]
struct UserProfile {
    id: Principal,
    username: String,
    created: i64,
}

// Stable storage -- survives canister upgrades
thread_local! {
    static MEMORY_MANAGER: RefCell<MemoryManager<DefaultMemoryImpl>> =
        RefCell::new(MemoryManager::init(DefaultMemoryImpl::default()));

    // Posts keyed by id (u64 as big-endian bytes) -> candid-encoded Post
    static POSTS: RefCell<StableBTreeMap<Vec<u8>, Vec<u8>, Memory>> = RefCell::new(
        StableBTreeMap::init(
            MEMORY_MANAGER.with(|m| m.borrow().get(MemoryId::new(0)))
        )
    );

    // Post counter in stable memory
    static POST_COUNTER: RefCell<StableCell<u64, Memory>> = RefCell::new(
        StableCell::init(
            MEMORY_MANAGER.with(|m| m.borrow().get(MemoryId::new(1))),
            0u64,
        )
    );

    // Store the user_service canister ID (set during init, re-set on upgrade)
    static USER_SERVICE_ID: RefCell<Option<Principal>> = RefCell::new(None);
}

fn post_id_to_key(id: u64) -> Vec<u8> {
    id.to_be_bytes().to_vec()
}

fn serialize_post(post: &Post) -> Vec<u8> {
    candid::encode_one(post).unwrap()
}

fn deserialize_post(bytes: &[u8]) -> Post {
    candid::decode_one(bytes).unwrap()
}

#[init]
fn init(user_service_id: Principal) {
    USER_SERVICE_ID.with(|id| *id.borrow_mut() = Some(user_service_id));
}

#[post_upgrade]
fn post_upgrade(user_service_id: Principal) {
    // Re-set the user_service ID (not stored in stable memory for simplicity,
    // since it is always passed as an init/upgrade argument)
    init(user_service_id);
}

fn get_user_service_id() -> Principal {
    USER_SERVICE_ID.with(|id| {
        id.borrow().expect("user_service canister ID not set")
    })
}

// Defensive: capture caller before any await
#[update]
async fn create_post(title: String, body: String) -> Result<Post, String> {
    // Capture caller before the await as defensive practice
    let
original_caller = ic_cdk::api::msg_caller();
    if original_caller == Principal::anonymous() {
        return Err("Unauthorized".to_string());
    }

    // Inter-canister call to user_service
    let user_service = get_user_service_id();
    let (is_valid,): (bool,) = Call::unbounded_wait(user_service, "is_valid_user")
        .with_arg(original_caller)
        .await
        .map_err(|e| format!("User service call failed: {:?}", e))?
        .candid_tuple()
        .map_err(|e| format!("Failed to decode response: {:?}", e))?;

    if !is_valid {
        return Err("User not registered".to_string());
    }

    let id = POST_COUNTER.with(|counter| {
        let mut counter = counter.borrow_mut();
        let id = *counter.get();
        counter.set(id + 1);
        id
    });

    let post = Post {
        id,
        author: original_caller, // Use the captured caller
        title,
        body,
        created: ic_cdk::api::time() as i64,
    };

    POSTS.with(|posts| {
        posts.borrow_mut().insert(post_id_to_key(id), serialize_post(&post));
    });

    Ok(post)
}

#[query]
fn get_posts() -> Vec<Post> {
    POSTS.with(|posts| {
        posts.borrow().iter()
            .map(|entry| deserialize_post(&entry.value()))
            .collect()
    })
}

// Cross-canister enrichment: get posts with the author profile
#[update]
async fn get_posts_with_author(author_id: Principal) -> (Option<UserProfile>, Vec<Post>) {
    let user_service = get_user_service_id();

    // Call user_service for profile data
    let user_profile: Option<UserProfile> = match Call::unbounded_wait(user_service, "get_user")
        .with_arg(author_id)
        .await
    {
        Ok(response) => response.candid_tuple::<(Option<UserProfile>,)>()
            .map(|(profile,)| profile)
            .unwrap_or(None),
        Err(_) => None, // Handle gracefully if the user service is down
    };

    let author_posts = POSTS.with(|posts| {
        posts.borrow().iter()
            .map(|entry| deserialize_post(&entry.value()))
            .filter(|p| p.author == author_id)
            .collect()
    });

    (user_profile, author_posts)
}

#[update]
async fn delete_post(id: u64) -> Result<(), String> {
    let original_caller = ic_cdk::api::msg_caller();
    POSTS.with(|posts| {
        let mut posts = posts.borrow_mut();
        let key = post_id_to_key(id);
        match posts.get(&key) {
            Some(bytes) => {
                let post = deserialize_post(&bytes);
                if
post.author != original_caller {
                    return Err("Unauthorized".to_string());
                }
                posts.remove(&key);
                Ok(())
            }
            None => Err("Not found".to_string()),
        }
    })
}

ic_cdk::export_candid!();
```

### Canister Factory Pattern

A canister that creates other canisters dynamically. Useful for per-user canisters, sharding, or dynamic scaling.

#### Motoko Factory

```motoko
import Principal "mo:core/Principal";
import Map "mo:core/Map";
import Array "mo:core/Array";
import Runtime "mo:core/Runtime";

persistent actor Self {
  type CanisterSettings = {
    controllers : ?[Principal];
    compute_allocation : ?Nat;
    memory_allocation : ?Nat;
    freezing_threshold : ?Nat;
  };

  type CreateCanisterResult = {
    canister_id : Principal;
  };

  // IC management canister
  transient let ic : actor {
    create_canister : shared ({ settings : ?CanisterSettings }) -> async CreateCanisterResult;
    install_code : shared ({
      mode : { #install; #reinstall; #upgrade };
      canister_id : Principal;
      wasm_module : Blob;
      arg : Blob;
    }) -> async ();
    deposit_cycles : shared ({ canister_id : Principal }) -> async ();
  } = actor "aaaaa-aa";

  // Track created canisters: owner -> canister
  let childCanisters = Map.empty<Principal, Principal>();

  // Create a new canister for a user
  public shared ({ caller }) func createChildCanister(wasmModule : Blob) : async Principal {
    if (Principal.isAnonymous(caller)) { Runtime.trap("Auth required") };

    // Create the canister with cycles attached
    let createResult = await (with cycles = 1_000_000_000_000) ic.create_canister({
      settings = ?{
        controllers = ?[Principal.fromActor(Self), caller];
        compute_allocation = null;
        memory_allocation = null;
        freezing_threshold = null;
      };
    });
    let canisterId = createResult.canister_id;

    // Install code
    await ic.install_code({
      mode = #install;
      canister_id = canisterId;
      wasm_module = wasmModule;
      arg = to_candid (caller); // Pass the owner as the init arg
    });

    Map.add(childCanisters, Principal.compare, caller, canisterId);
    canisterId
  };

  // Get a user's canister
  public query func getChildCanister(owner : Principal) : async ?Principal
  {
    Map.get(childCanisters, Principal.compare, owner)
  };
};
```

#### Rust Factory

```rust
use candid::{CandidType, Deserialize, Nat, Principal, encode_one};
use ic_cdk::management_canister::{
    create_canister_with_extra_cycles, install_code, CreateCanisterArgs,
    InstallCodeArgs, CanisterInstallMode, CanisterSettings,
};
use ic_cdk::update;
use ic_stable_structures::memory_manager::{MemoryId, MemoryManager, VirtualMemory};
use ic_stable_structures::{DefaultMemoryImpl, StableBTreeMap};
use std::cell::RefCell;

type Memory = VirtualMemory<DefaultMemoryImpl>;

thread_local! {
    static MEMORY_MANAGER: RefCell<MemoryManager<DefaultMemoryImpl>> =
        RefCell::new(MemoryManager::init(DefaultMemoryImpl::default()));

    // Stable storage: owner principal -> child canister principal (survives upgrades)
    static CHILD_CANISTERS: RefCell<StableBTreeMap<Vec<u8>, Vec<u8>, Memory>> = RefCell::new(
        StableBTreeMap::init(
            MEMORY_MANAGER.with(|m| m.borrow().get(MemoryId::new(0)))
        )
    );
}

#[update]
async fn create_child_canister(wasm_module: Vec<u8>) -> Principal {
    let caller = ic_cdk::api::msg_caller();
    assert_ne!(caller, Principal::anonymous(), "Auth required");

    // Create the canister
    let create_args = CreateCanisterArgs {
        settings: Some(CanisterSettings {
            controllers: Some(vec![ic_cdk::api::canister_self(), caller]),
            compute_allocation: None,
            memory_allocation: None,
            freezing_threshold: None,
            reserved_cycles_limit: None,
            log_visibility: None,
            wasm_memory_limit: None,
            wasm_memory_threshold: None,
            environment_variables: None,
        }),
    };

    // Attach 1T cycles for the new canister
    let create_result = create_canister_with_extra_cycles(&create_args, 1_000_000_000_000u128)
        .await
        .expect("Failed to create canister");
    let canister_id = create_result.canister_id;

    // Install code
    let install_args = InstallCodeArgs {
        mode: CanisterInstallMode::Install,
        canister_id,
        wasm_module,
        arg: encode_one(&caller).unwrap(), // Pass the owner as the init arg
    };
    install_code(&install_args)
        .await
        .expect("Failed to install code");

    // Track the child canister
    CHILD_CANISTERS.with(|canisters| {
        canisters.borrow_mut().insert(
            caller.as_slice().to_vec(),
            canister_id.as_slice().to_vec(),
        );
    });

    canister_id
}

#[ic_cdk::query]
fn get_child_canister(owner: Principal) -> Option<Principal> {
    CHILD_CANISTERS.with(|canisters| {
        canisters.borrow().get(&owner.as_slice().to_vec())
            .map(|bytes| Principal::from_slice(&bytes))
    })
}
```

## Upgrade Strategy for Multi-Canister Systems

### Ordering

1. Deploy shared dependencies first (e.g., `user_service` before `content_service`).
2. Never change Candid interfaces in a breaking way. Add new fields as `opt` types.
3. Test upgrades locally before mainnet.

### Safe Upgrade Checklist

- Never remove or rename fields in existing types shared across canisters.
- Add new fields as optional (`?Type` in Motoko, `Option<T>` in Rust).
- If a canister's Candid interface changes, upgrade consumers after the provider.
- Always have both `#[init]` and `#[post_upgrade]` in Rust canisters.
- In Motoko, `persistent actor` handles stable storage automatically.

### Upgrade Commands

```bash
# Upgrade canisters in dependency order
icp deploy user_service

# The Rust content_service requires the user_service principal on every upgrade (post_upgrade arg)
USER_SERVICE_ID=$(icp canister id user_service)
icp deploy content_service --argument "(principal \"$USER_SERVICE_ID\")"

npm run build
icp deploy frontend
```

## Deploy & Test

### Local Development

```bash
# Start the local replica
icp network start -d

# Deploy in dependency order
icp deploy user_service

# content_service (Rust) requires the user_service canister ID as an init argument
USER_SERVICE_ID=$(icp canister id user_service)
icp deploy content_service --argument "(principal \"$USER_SERVICE_ID\")"

# Build and deploy the frontend
npm run build
icp deploy frontend
```

### Test Inter-Canister Calls (Motoko)

```bash
# Register a user
PRINCIPAL=$(icp identity principal)
icp canister call user_service register "(\"alice\")"

# Verify the user exists
icp canister call user_service isValidUser "(principal \"$PRINCIPAL\")"
# Expected: (true)

# Create a post
# (triggers an inter-canister call to user_service)
icp canister call content_service createPost "(\"Hello World\", \"My first post\")"
# Expected: (variant { ok = record { id = 0; author = principal "..."; ... } })

# Get all posts
icp canister call content_service getPosts
# Expected: (vec { record { id = 0; ... } })
```

### Test Inter-Canister Calls (Rust)

Rust canisters use snake_case function names:

```bash
PRINCIPAL=$(icp identity principal)
icp canister call user_service register "(\"alice\")"
icp canister call user_service is_valid_user "(principal \"$PRINCIPAL\")"
# Expected: (true)

# content_service must have been deployed with --argument "(principal \"$USER_SERVICE_ID\")"
icp canister call content_service create_post "(\"Hello World\", \"My first post\")"
# Expected: (variant { ok = record { id = 0 : nat64; author = principal "..."; ... } })

icp canister call content_service get_posts
# Expected: (vec { record { id = 0 : nat64; ... } })
```

## Verify It Works

### Verify User Registration

```bash
icp canister call user_service register '("testuser")'
# Expected: (variant { ok = record { id = principal "..."; username = "testuser"; created = ... } })
```

### Verify Inter-Canister Call

```bash
# This call should succeed (the user is registered)
# Motoko: createPost / Rust: create_post
icp canister call content_service createPost '("Test Title", "Test Body")'
# Expected: (variant { ok = record { ... } })

# Create a new identity that is NOT registered
icp identity new unregistered --storage plaintext
icp identity use unregistered
icp canister call content_service createPost '("Should Fail", "No user")'
# Expected: an err variant (Rust: (variant { err = "User not registered" });
# Motoko: (variant { err = variant { Unauthorized } }))

# Switch back
icp identity use default
```

### Verify Cross-Canister Query

```bash
PRINCIPAL=$(icp identity principal)
# Motoko: getPostsWithAuthor / Rust: get_posts_with_author
icp canister call content_service getPostsWithAuthor "(principal \"$PRINCIPAL\")"
# Expected: (opt record { id = ...; username = "testuser"; ...
# }, vec { record { ... } })
```

### Verify Canister Factory

```bash
# Read the wasm file for the child canister
# (in practice you'd upload or reference a wasm blob)
icp canister call factory createChildCanister '(blob "...")'
# Expected: (principal "NEW-CANISTER-ID")

icp canister call factory getChildCanister "(principal \"$PRINCIPAL\")"
# Expected: (opt principal "NEW-CANISTER-ID")
```

---

---
name: sns-launch
title: SNS DAO Launch
category: Governance
description: "Configure and launch an SNS DAO. Token economics, proposal types, nervous system parameters, and decentralization swap."
endpoints: 22
version: 1.9.1
status: stable
dependencies: [icrc-ledger, multi-canister]
requires: [icp-cli >= 0.1.0, dfx sns extension, NNS neuron with stake]
tags: [dao, governance, sns, token, swap, decentralization, proposal, neuron]
---

# SNS DAO Launch

## What This Is

The Service Nervous System (SNS) is the DAO framework for decentralizing individual Internet Computer dapps. Just as the NNS governs the IC network itself, an SNS governs a specific dapp -- token holders vote on proposals to upgrade code, manage treasury funds, and set parameters. Launching an SNS transfers canister control from the developers to a community-owned governance system through a decentralization swap.
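At finalization, the swap sets a single exchange rate: each participant's share of the swap token pool is proportional to the ICP they contributed. A minimal sketch of that pro-rata math (illustrative names and numbers only; the real Swap canister additionally splits each allocation into a basket of neurons and handles Neurons' Fund participation):

```rust
/// Pro-rata allocation sketch: participant_tokens = swap_pool * participant_icp / total_icp.
/// All amounts are in e8s (1 token = 100_000_000 e8s). Illustrative only.
fn allocate_swap_tokens(swap_pool_e8s: u128, contributions_e8s: &[(&str, u128)]) -> Vec<(String, u128)> {
    let total_icp_e8s: u128 = contributions_e8s.iter().map(|(_, icp)| icp).sum();
    contributions_e8s
        .iter()
        .map(|(who, icp)| (who.to_string(), swap_pool_e8s * icp / total_icp_e8s))
        .collect()
}

fn main() {
    // 2_500_000 tokens in the swap pool; alice contributed 300 ICP, bob 100 ICP.
    let e8s = 100_000_000u128;
    let allocations = allocate_swap_tokens(2_500_000 * e8s, &[("alice", 300 * e8s), ("bob", 100 * e8s)]);
    // alice receives 3/4 of the pool, bob 1/4
    assert_eq!(allocations[0].1, 1_875_000 * e8s);
    assert_eq!(allocations[1].1, 625_000 * e8s);
}
```

Note that integer division truncates, so dust-sized remainders stay in the pool; the production implementation accounts for this explicitly.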
## Prerequisites

- `icp-cli` >= 0.1.0 (`brew install dfinity/tap/icp-cli`)
- `dfx` with the sns extension (`dfx extension install sns`) for prepare-canisters, validate, and propose
- An NNS neuron with sufficient stake to submit proposals (mainnet)
- Dapp canisters already deployed and working on mainnet
- `sns_init.yaml` configuration file with all parameters defined

## Canister IDs

| Canister | Mainnet ID | Purpose |
|----------|-----------|---------|
| NNS Governance | `rrkah-fqaaa-aaaaa-aaaaq-cai` | Votes on SNS creation proposals |
| SNS-W (Wasm Modules) | `qaa6y-5yaaa-aaaaa-aaafa-cai` | Deploys and initializes SNS canisters |
| NNS Root | `r7inp-6aaaa-aaaaa-aaabq-cai` | Must be co-controller of dapp before launch |
| ICP Ledger | `ryjl3-tyaaa-aaaaa-aaaba-cai` | Handles ICP token transfers during swap |

## SNS Canisters Deployed

When an SNS launch succeeds, SNS-W deploys these canisters on an SNS subnet:

| Canister | Purpose |
|----------|---------|
| **Governance** | Proposal submission, voting, neuron management |
| **Ledger** | SNS token transfers (ICRC-1 standard) |
| **Root** | Sole controller of all dapp canisters post-launch |
| **Swap** | Runs the decentralization swap (ICP for SNS tokens) |
| **Index** | Transaction indexing for the SNS ledger |
| **Archive** | Historical transaction storage |

## Mistakes That Break Your Build

1. **Setting `min_participants` too high.** If you require 500 participants but only 200 show up, the entire swap fails and all ICP is refunded. Start conservative -- most successful SNS launches use 100-200 minimum participants.
2. **Forgetting to add NNS Root as co-controller before proposing.** The launch process requires NNS Root to take over your canisters. If you submit the proposal without adding it first, the launch will fail at stage 6 when SNS Root tries to become sole controller.
3.
**Not testing on SNS testflight first.** Going straight to mainnet means discovering configuration issues after your NNS proposal is live. Always deploy a testflight (mock) SNS on mainnet first to verify governance and upgrade flows.
4. **Token economics that fail NNS review.** The NNS community votes on your proposal. Unreasonable tokenomics (excessive developer allocation, zero vesting, absurd swap caps) will get rejected. Study successful SNS launches (OpenChat, Hot or Not, Kinic) for parameter ranges the community accepts.
5. **Not defining fallback controllers.** If the swap fails, control of the dapp needs to be returned to someone. Without `fallback_controller_principals`, your dapp could become uncontrollable.
6. **Setting the swap duration too short.** Users across time zones need time to participate. Less than 24 hours is risky -- 3-7 days is standard.
7. **Forgetting restricted proposal types during the swap.** Six governance proposal types are blocked while the swap runs: `ManageNervousSystemParameters`, `TransferSnsTreasuryFunds`, `MintSnsTokens`, `UpgradeSnsControlledCanister`, `RegisterDappCanisters`, `DeregisterDappCanisters`. Do not plan operations that require these during the swap window.
8. **Developer neurons with zero dissolve delay.** Developers could immediately dump tokens post-launch. Set dissolve delays and vesting periods (12-48 months is typical) to signal long-term commitment.

## Implementation

### SNS Configuration File (sns_init.yaml)

This is the single source of truth for all launch parameters. Copy the template from the `dfinity/sns-testing` repo and customize:

```yaml
# Note: numeric values are in e8s (1 token = 100_000_000 e8s). Time values are in seconds.

# === PROJECT METADATA ===
name: MyProject
description: >
  A decentralized application for [purpose].
  This proposal requests the NNS to create an SNS for MyProject.
logo: logo.png
url: https://myproject.com

# === NNS PROPOSAL TEXT ===
NnsProposal:
  title: "Proposal to create an SNS for MyProject"
  url: "https://forum.dfinity.org/t/myproject-sns-proposal/XXXXX"
  summary: >
    This proposal creates an SNS DAO to govern MyProject.
    Token holders will control upgrades, treasury, and parameters.

# === FALLBACK (if the swap fails, these principals regain control) ===
fallback_controller_principals:
  - YOUR_PRINCIPAL_ID_HERE

# === CANISTER IDS TO DECENTRALIZE ===
dapp_canisters:
  - BACKEND_CANISTER_ID
  - FRONTEND_CANISTER_ID

# === TOKEN CONFIGURATION ===
Token:
  name: MyToken
  symbol: MYT
  transaction_fee: 0.0001 tokens
  logo: token_logo.png

# === GOVERNANCE PARAMETERS ===
Proposals:
  rejection_fee: 1 token
  initial_voting_period: 4 days
  maximum_wait_for_quiet_deadline_extension: 1 day

Neurons:
  minimum_creation_stake: 1 token

Voting:
  minimum_dissolve_delay: 1 month
  MaximumVotingPowerBonuses:
    DissolveDelay:
      duration: 8 years
      bonus: 100%        # 2x voting power at max dissolve
    Age:
      duration: 4 years
      bonus: 25%
  RewardRate:
    initial: 2.5%
    final: 2.5%
    transition_duration: 0 seconds

# === TOKEN DISTRIBUTION ===
Distribution:
  Neurons:
    # Developer allocation (with vesting)
    - principal: DEVELOPER_PRINCIPAL
      stake: 2_000_000 tokens
      memo: 0
      dissolve_delay: 6 months
      vesting_period: 24 months
    # Seed investors
    - principal: INVESTOR_PRINCIPAL
      stake: 500_000 tokens
      memo: 1
      dissolve_delay: 3 months
      vesting_period: 12 months
  InitialBalances:
    treasury: 5_000_000 tokens   # Treasury (controlled by the DAO)
    swap: 2_500_000 tokens       # Sold during the decentralization swap
  total: 10_000_000 tokens       # Must equal the sum of all allocations

# === DECENTRALIZATION SWAP ===
Swap:
  minimum_participants: 100
  minimum_direct_participation_icp: 50_000 tokens
  maximum_direct_participation_icp: 500_000 tokens
  minimum_participant_icp: 1 token
  maximum_participant_icp: 25_000 tokens
  duration: 7 days
  neurons_fund_participation: true
  VestingSchedule:
    events: 5            # Neurons unlock in 5 stages
    interval: 3 months
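  # Editorial note (added, not part of the upstream template): the distribution above
  # checks out -- 2_000_000 (dev) + 500_000 (seed) + 5_000_000 (treasury) + 2_500_000 (swap)
  # = 10_000_000 tokens = total. With `events: 5` and `interval: 3 months`, each swap
  # participant receives a basket of neurons whose dissolve delays step up by one
  # interval per tranche, so the final tranche becomes liquid roughly a year after
  # the swap finalizes.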
  confirmation_text: >
    I confirm that I am not a resident of a restricted jurisdiction
    and I understand the risks of participating in this token swap.
  restricted_countries:
    - US
    - CN
```

### Launch Process (11 Stages)

```
Stage 1:  Developer defines parameters in sns_init.yaml
Stage 2:  Developer adds NNS Root as co-controller of the dapp canisters
Stage 3:  Developer submits the NNS proposal using `dfx sns propose`
Stage 4:  NNS community votes on the proposal
Stage 5:  (If adopted) SNS-W deploys uninitialized SNS canisters
Stage 6:  SNS Root becomes sole controller of the dapp canisters
Stage 7:  SNS-W initializes the canisters in pre-decentralization-swap mode
Stage 8:  24-hour minimum wait before the swap opens
Stage 9:  Decentralization swap opens (users send ICP, receive SNS neurons)
Stage 10: Swap closes (time expires or maximum ICP reached)
Stage 11: Finalization (exchange rate set, neurons created, normal mode)
```

### Motoko

Prepare your canister for SNS control. The key requirement is that your canister accepts upgrade proposals from SNS governance:

```motoko
import Principal "mo:core/Principal";
import Runtime "mo:core/Runtime";

persistent actor {
  // SNS Root will be set as the sole controller after launch.
  // Your canister code does not need to change -- SNS governance
  // controls upgrades via the standard canister management API.

  // If your canister has admin functions, transition them to
  // accept SNS governance proposals instead of direct principal checks:
  var snsGovernanceId : ?Principal = null;

  // ⚠ SECURITY: This setter MUST be access-controlled. Without a check, any caller
  // can front-run you and set themselves as governance, permanently locking you out.
  // The check below restricts it to the canister's controllers; an explicit
  // admin list of principals also works.
  public shared ({ caller }) func setSnsGovernance(id : Principal) : async () {
    // Only the deployer (or canister controllers) should call this.
    assert (Principal.isController(caller));
    switch (snsGovernanceId) {
      case (null) { snsGovernanceId := ?id };
      case (?_) { Runtime.trap("SNS governance already set") };
    };
  };

  func requireGovernance(caller : Principal) {
    switch (snsGovernanceId) {
      case (?gov) {
        if (caller != gov) { Runtime.trap("Only SNS governance can call this") };
      };
      case (null) { Runtime.trap("SNS governance not configured") };
    };
  };

  // Admin functions become governance-gated:
  public shared ({ caller }) func updateConfig(newFee : Nat) : async () {
    requireGovernance(caller);
    // ... apply the config change
  };
};
```

### Rust

```rust
use candid::{CandidType, Deserialize, Principal};
use ic_cdk::{init, post_upgrade, query, update};
use std::cell::RefCell;

#[derive(CandidType, Deserialize, Clone)]
struct Config {
    sns_governance: Option<Principal>,
}

thread_local! {
    // ⚠ STATE LOSS: a RefCell in thread_local! is HEAP storage -- it is wiped on every
    // canister upgrade. In production, use ic-stable-structures (StableCell or StableBTreeMap)
    // to persist this across upgrades. At minimum, implement #[pre_upgrade]/#[post_upgrade]
    // hooks to serialize/deserialize this data. Without that, an upgrade erases your
    // governance config and locks out SNS control.
    static CONFIG: RefCell<Config> = RefCell::new(Config {
        sns_governance: None,
    });
}

fn require_governance(caller: Principal) {
    CONFIG.with(|c| {
        let config = c.borrow();
        match config.sns_governance {
            Some(gov) if gov == caller => (),
            Some(_) => ic_cdk::trap("Only SNS governance can call this"),
            None => ic_cdk::trap("SNS governance not configured"),
        }
    });
}

// ⚠ SECURITY: This setter MUST be access-controlled. Without a check, any caller
// can front-run you and set themselves as governance, permanently locking you out.
#[update]
fn set_sns_governance(id: Principal) {
    // Only canister controllers should call this.
    if !ic_cdk::api::is_controller(&ic_cdk::api::msg_caller()) {
        ic_cdk::trap("Only canister controllers can set governance");
    }
    CONFIG.with(|c| {
        let mut config = c.borrow_mut();
        if config.sns_governance.is_some() {
            ic_cdk::trap("SNS governance already set");
        }
        config.sns_governance = Some(id);
    });
}

#[update]
fn update_config(new_fee: u64) {
    let caller = ic_cdk::api::msg_caller();
    require_governance(caller);
    // ... apply the config change
}
```

**Cargo.toml dependencies:**

```toml
[package]
name = "sns_dapp_backend"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["cdylib"]

[dependencies]
candid = "0.10"
ic-cdk = "0.19"
serde = { version = "1", features = ["derive"] }
```

## Deploy & Test

### Local Testing with sns-testing

```bash
# Clone the SNS testing repository
git clone https://github.com/dfinity/sns-testing.git
cd sns-testing

# WARNING: starting a fresh network wipes all local canister data. Only use for a fresh setup.
icp network start -d

# Deploy NNS canisters locally (includes governance, ledger, SNS-W)
# Note: use the sns-testing repo's setup scripts for NNS + SNS-W canister installation.
# See https://github.com/dfinity/sns-testing for current instructions.

# Deploy your dapp canisters
icp deploy my_backend
icp deploy my_frontend

# Deploy a testflight SNS locally using your config.
# Use the sns-testing repo tooling to deploy a local testflight SNS.
# See the sns-testing README for the current testflight workflow.
```

### Mainnet Testflight (Mock SNS)

```bash
# Deploy a mock SNS on mainnet to test governance flows.
# This does NOT do a real swap -- it creates a mock SNS that you control.
# Use the sns-testing repo tooling for mainnet testflight deployment.
# See https://github.com/dfinity/sns-testing for the current testflight workflow.
# Test submitting proposals, voting, and upgrading via SNS governance ``` ### Mainnet Launch (Real) ```bash # Step 1: Add NNS Root as co-controller of each dapp canister # Requires dfx sns extension: `dfx extension install sns` dfx sns prepare-canisters add-nns-root BACKEND_CANISTER_ID --network ic dfx sns prepare-canisters add-nns-root FRONTEND_CANISTER_ID --network ic # Step 2: Validate your config locally before submitting dfx sns init-config-file validate # Or review the rendered proposal by inspecting the yaml output carefully. # You can also test the full flow on a local replica first (see Local Testing above). # Step 3: Submit the proposal (THIS IS IRREVERSIBLE — double-check your config) dfx sns propose --network ic --neuron $NEURON_ID sns_init.yaml ``` ## Verify It Works ### After local testflight deployment: ```bash # List deployed SNS canisters icp canister id sns_governance icp canister id sns_ledger icp canister id sns_root icp canister id sns_swap # Verify SNS governance is operational icp canister call sns_governance get_nervous_system_parameters '()' # Expected: returns the governance parameters you configured # Verify token distribution icp canister call sns_ledger icrc1_total_supply '()' # Expected: matches your total token supply # Verify dapp canister controllers changed icp canister status BACKEND_CANISTER_ID # Expected: controller is the SNS Root canister, NOT your principal # Test an SNS proposal (upgrade your canister via governance) icp canister call sns_governance manage_neuron '(record { ... 
})' # Expected: proposal created, can be voted on ``` ### After mainnet launch: ```bash # Check swap status icp canister call SNS_SWAP_ID get_state '()' -e ic # Expected: shows swap status, participation count, ICP raised # Check SNS governance icp canister call SNS_GOVERNANCE_ID get_nervous_system_parameters '()' -e ic # Expected: returns your configured parameters # Verify dapp controller is SNS Root icp canister status BACKEND_CANISTER_ID -e ic # Expected: single controller = SNS Root canister ID ``` --- --- name: stable-memory title: "Stable Memory & Upgrades" category: Architecture description: "Manage canister state across upgrades. Stable structures, pre/post upgrade hooks, and memory-mapped data." endpoints: 6 version: 2.0.2 status: stable dependencies: [] requires: [icp-cli >= 0.1.0] tags: [storage, persistence, upgrade, memory, stable-structures, heap, state] --- # Stable Memory & Canister Upgrades ## What This Is Stable memory is persistent storage on the Internet Computer that survives canister upgrades. Heap memory (regular variables) is wiped on every upgrade. Any data you care about MUST be in stable memory, or it will be lost the next time the canister is deployed. ## Prerequisites - icp-cli >= 0.1.0 (`brew install dfinity/tap/icp-cli`) - For Motoko: mops with `core = "2.0.0"` in mops.toml - For Rust: `ic-stable-structures = "0.7"` in Cargo.toml ## Canister IDs No external canister dependencies. Stable memory is a local canister feature. ## Mistakes That Break Your Build 1. **Using `thread_local! { RefCell<...> }` for user data (Rust)** -- This is heap memory. It is wiped on every canister upgrade. All user data, balances, and settings stored this way will vanish after `icp deploy`. Use `StableBTreeMap` instead. 2. **Forgetting `#[post_upgrade]` handler (Rust)** -- Without a `post_upgrade` function, the canister may silently reset state or behave unexpectedly after upgrade. Always define both `#[init]` and `#[post_upgrade]`. 3.
**Using `stable` keyword in persistent actors (Motoko)** -- In mo:core `persistent actor`, all `let` and `var` declarations are automatically stable. Writing `stable let` produces warning M0218 and `stable var` is redundant. Just use `let` and `var`. 4. **Confusing heap memory limits with stable memory limits (Rust)** -- Heap (Wasm linear) memory is limited to 4GB. Stable memory can grow up to hundreds of GB (the subnet storage limit). The real danger: if you use `pre_upgrade`/`post_upgrade` hooks to serialize heap data to stable memory and deserialize it back, you are limited by the 4GB heap AND by the instruction limit for upgrade hooks. Large datasets will trap during upgrade, bricking the canister. The solution is to use stable structures (`StableBTreeMap`, `StableCell`, etc.) that read/write directly to stable memory, bypassing the heap entirely. Use `MemoryManager` to partition stable memory into virtual memories so multiple structures can coexist without overwriting each other. 5. **Changing record field types between upgrades (Motoko)** -- Altering the type of a persistent field (e.g., `Nat` to `Int`, or renaming a record field) will trap on upgrade and data is unrecoverable. Only ADD new optional fields. Never remove or rename existing ones. 6. **Serializing large data in pre_upgrade (Rust)** -- `pre_upgrade` has a fixed instruction limit. If you serialize a large HashMap to stable memory in pre_upgrade, it will hit the limit and trap, bricking the canister. Use `StableBTreeMap` which writes directly to stable memory and needs no serialization step. 7. **Using `actor { }` instead of `persistent actor { }` (Motoko)** -- Plain `actor` in mo:core requires explicit `stable` annotations and pre/post_upgrade hooks. `persistent actor` makes everything stable by default. Always use `persistent actor`. ## Implementation ### Motoko With mo:core 2.0, `persistent actor` makes stable storage trivial. 
All `let` and `var` declarations inside the actor body are automatically persisted across upgrades. ```motoko import Map "mo:core/Map"; import List "mo:core/List"; import Nat "mo:core/Nat"; import Text "mo:core/Text"; import Time "mo:core/Time"; persistent actor { // Types -- must be inside actor body type User = { id : Nat; name : Text; created : Int; }; // These survive upgrades automatically -- no "stable" keyword needed let users = Map.empty<Nat, User>(); var userCounter : Nat = 0; let tags = List.empty<Text>(); // Transient data -- reset to initial value on every upgrade transient var requestCount : Nat = 0; public func addUser(name : Text) : async Nat { let id = userCounter; Map.add(users, Nat.compare, id, { id; name; created = Time.now(); }); userCounter += 1; requestCount += 1; id }; public query func getUser(id : Nat) : async ?User { Map.get(users, Nat.compare, id) }; public query func getUserCount() : async Nat { Map.size(users) }; // requestCount resets to 0 after every upgrade public query func getRequestCount() : async Nat { requestCount }; } ``` Key rules for Motoko persistent actors: - `let` for Map, List, Set, Queue -- auto-persisted, no serialization - `var` for simple values (Nat, Text, Bool) -- auto-persisted - `transient var` for caches, counters that should reset on upgrade - NO `pre_upgrade` / `post_upgrade` needed -- the runtime handles it - NO `stable` keyword -- it is redundant and produces warnings #### mops.toml ```toml [package] name = "my-project" version = "0.1.0" [dependencies] core = "2.0.0" ``` ### Rust Rust canisters use `ic-stable-structures` for persistent storage. The `MemoryManager` partitions stable memory (up to hundreds of GB, limited by subnet storage) into virtual memories, each backing a different data structure.
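Before the real code, the partitioning idea can be modeled in plain Rust with no IC dependencies. `ToyMemoryManager` below is an invented illustration, not the ic-stable-structures implementation: it only shows why each structure gets its own `MemoryId` and why two structures must never share one.

```rust
use std::collections::HashMap;

// Toy stand-in for ic-stable-structures' MemoryManager: each memory id
// owns its own isolated byte region, so structures never overlap.
struct ToyMemoryManager {
    regions: HashMap<u8, Vec<u8>>,
}

impl ToyMemoryManager {
    fn new() -> Self {
        Self { regions: HashMap::new() }
    }

    // Mirrors the idea of MemoryManager::get(MemoryId): hands out the
    // region for this id, creating it on first use.
    fn get(&mut self, id: u8) -> &mut Vec<u8> {
        self.regions.entry(id).or_default()
    }
}

fn main() {
    let mut mm = ToyMemoryManager::new();
    mm.get(0).extend_from_slice(b"users");   // "USERS" structure writes here
    mm.get(1).extend_from_slice(b"counter"); // "COUNTER" structure writes here

    // Distinct ids map to disjoint regions: neither write clobbered the other.
    assert_eq!(mm.get(0).as_slice(), &b"users"[..]);
    assert_eq!(mm.get(1).as_slice(), &b"counter"[..]);

    // Reusing an id aliases the SAME region, which is exactly why two
    // stable structures must never share a MemoryId.
    mm.get(1).extend_from_slice(b"!!!");
    assert_eq!(mm.get(1).as_slice(), &b"counter!!!"[..]);
}
```

The real `MemoryManager` does the same bookkeeping over raw stable memory pages instead of heap vectors.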
#### Cargo.toml ```toml [package] name = "stable_memory_backend" version = "0.1.0" edition = "2021" [lib] crate-type = ["cdylib"] [dependencies] ic-cdk = "0.19" ic-stable-structures = "0.7" candid = "0.10" serde = { version = "1", features = ["derive"] } ``` #### Single Stable Structure (Simple Case) ```rust use ic_stable_structures::{ memory_manager::{MemoryId, MemoryManager, VirtualMemory}, DefaultMemoryImpl, StableBTreeMap, }; use ic_cdk::{init, post_upgrade, query, update}; use candid::{CandidType, Deserialize}; use std::cell::RefCell; type Memory = VirtualMemory<DefaultMemoryImpl>; // Stable storage -- survives upgrades thread_local! { static MEMORY_MANAGER: RefCell<MemoryManager<DefaultMemoryImpl>> = RefCell::new(MemoryManager::init(DefaultMemoryImpl::default())); static USERS: RefCell<StableBTreeMap<u64, Vec<u8>, Memory>> = RefCell::new(StableBTreeMap::init( MEMORY_MANAGER.with(|m| m.borrow().get(MemoryId::new(0))) )); // Counter stored in stable memory via StableCell static COUNTER: RefCell<ic_stable_structures::StableCell<u64, Memory>> = RefCell::new(ic_stable_structures::StableCell::init( MEMORY_MANAGER.with(|m| m.borrow().get(MemoryId::new(1))), 0u64, )); } #[derive(CandidType, Deserialize, Clone)] struct User { id: u64, name: String, created: u64, } #[init] fn init() { // Any one-time initialization } #[post_upgrade] fn post_upgrade() { // Stable structures auto-restore -- no deserialization needed // Re-init timers or other transient state here } #[update] fn add_user(name: String) -> u64 { let id = COUNTER.with(|c| { let mut cell = c.borrow_mut(); let current = *cell.get(); cell.set(current + 1); current }); let user = User { id, name, created: ic_cdk::api::time(), }; let serialized = candid::encode_one(&user).expect("Failed to serialize user"); USERS.with(|users| { users.borrow_mut().insert(id, serialized); }); id } #[query] fn get_user(id: u64) -> Option<User> { USERS.with(|users| { users.borrow().get(&id).and_then(|bytes| { candid::decode_one(&bytes).ok() }) }) } #[query] fn get_user_count() -> u64 { USERS.with(|users| users.borrow().len()) } ``` #### Multiple Stable Structures
with MemoryManager ```rust use ic_stable_structures::{ memory_manager::{MemoryId, MemoryManager, VirtualMemory}, DefaultMemoryImpl, StableBTreeMap, StableCell, StableLog, }; use std::cell::RefCell; type Memory = VirtualMemory<DefaultMemoryImpl>; // Each structure gets its own MemoryId -- NEVER reuse IDs const USERS_MEM_ID: MemoryId = MemoryId::new(0); const POSTS_MEM_ID: MemoryId = MemoryId::new(1); const COUNTER_MEM_ID: MemoryId = MemoryId::new(2); const LOG_INDEX_MEM_ID: MemoryId = MemoryId::new(3); const LOG_DATA_MEM_ID: MemoryId = MemoryId::new(4); thread_local! { static MEMORY_MANAGER: RefCell<MemoryManager<DefaultMemoryImpl>> = RefCell::new(MemoryManager::init(DefaultMemoryImpl::default())); static USERS: RefCell<StableBTreeMap<u64, Vec<u8>, Memory>> = RefCell::new(StableBTreeMap::init( MEMORY_MANAGER.with(|m| m.borrow().get(USERS_MEM_ID)) )); static POSTS: RefCell<StableBTreeMap<u64, Vec<u8>, Memory>> = RefCell::new(StableBTreeMap::init( MEMORY_MANAGER.with(|m| m.borrow().get(POSTS_MEM_ID)) )); static COUNTER: RefCell<StableCell<u64, Memory>> = RefCell::new(StableCell::init( MEMORY_MANAGER.with(|m| m.borrow().get(COUNTER_MEM_ID)), 0u64, )); static AUDIT_LOG: RefCell<StableLog<Vec<u8>, Memory, Memory>> = RefCell::new(StableLog::init( MEMORY_MANAGER.with(|m| m.borrow().get(LOG_INDEX_MEM_ID)), MEMORY_MANAGER.with(|m| m.borrow().get(LOG_DATA_MEM_ID)), )); } ``` Key rules for Rust stable structures: - `MemoryManager` partitions stable memory -- each structure gets a unique `MemoryId` - NEVER reuse a `MemoryId` for two different structures -- they will corrupt each other - `StableBTreeMap` keys must implement `Storable` + `Ord`, values must implement `Storable` - For complex types, serialize to `Vec<u8>` with candid or serde - `StableCell` for single values (counters, config) - `StableLog` for append-only logs (needs two memory regions: index + data) - `thread_local! { RefCell<StableBTreeMap<...>> }` is the correct pattern -- the RefCell wraps the stable structure, not a heap HashMap - No `pre_upgrade`/`post_upgrade` serialization needed -- data is already in stable memory ## Deploy & Test ### Motoko: Verify Persistence Across Upgrades ```bash # Start local replica icp network start -d # Deploy icp deploy backend # Add data icp canister call backend addUser '("Alice")' # Expected: (0 : nat) icp canister call backend addUser '("Bob")' # Expected: (1 : nat) # Verify data exists icp canister call backend getUserCount '()' # Expected: (2 : nat) icp canister call backend getUser '(0)' # Expected: (opt record { id = 0 : nat; name = "Alice"; created = ... }) # Now upgrade the canister (simulates code change + redeploy) icp deploy backend # Verify data survived the upgrade icp canister call backend getUserCount '()' # Expected: (2 : nat) -- STILL 2, not 0 icp canister call backend getUser '(1)' # Expected: (opt record { id = 1 : nat; name = "Bob"; created = ... }) ``` ### Rust: Verify Persistence Across Upgrades ```bash icp network start -d icp deploy backend icp canister call backend add_user '("Alice")' # Expected: (0 : nat64) icp canister call backend get_user_count '()' # Expected: (1 : nat64) # Upgrade icp deploy backend # Verify persistence icp canister call backend get_user_count '()' # Expected: (1 : nat64) -- data survived icp canister call backend get_user '(0)' # Expected: (opt record { id = 0 : nat64; name = "Alice"; created = ... }) ``` ## Verify It Works The definitive test for stable memory: data survives upgrade. ```bash # 1. Deploy and add data icp deploy backend icp canister call backend addUser '("TestUser")' # 2. Record the count icp canister call backend getUserCount '()' # Note the number # 3. Upgrade (redeploy) icp deploy backend # 4. Check count again -- must be identical icp canister call backend getUserCount '()' # Must match step 2 # 5.
Verify transient data DID reset icp canister call backend getRequestCount '()' # Expected: (0 : nat) -- transient var resets on upgrade ``` If the count drops to 0 after step 3, your data is NOT in stable memory. Review your storage declarations. --- --- name: vetkd title: vetKeys category: Security description: "Implement on-chain privacy using vetKeys. Key derivation, encryption/decryption flows, and access control patterns." endpoints: 5 version: 1.0.2 status: beta dependencies: [internet-identity] requires: [icp-cli >= 0.1.0] tags: [encryption, decryption, key-derivation, threshold, privacy, secret] --- # vetKeys (Verifiable Encrypted Threshold Keys) > **Note:** vetKeys is a newer feature of the IC. The `ic-vetkeys` Rust crate and `@dfinity/vetkeys` > npm package are published, but the APIs may still change over time. > Pin your dependency versions and check the [DFINITY forum](https://forum.dfinity.org) for any migration guides after upgrades. ## What This Is vetKeys (verifiably encrypted threshold keys) bring on-chain privacy to the IC via the **vetKD** protocol: secure, on-demand key derivation so that a public blockchain can hold and work with secret data. Keys are **verifiable** (users can check correctness and lack of tampering), **encrypted** (derived keys are encrypted under a user-supplied transport key—no node or canister ever sees the raw key), and **threshold** (a quorum of subnet nodes cooperates to derive keys; no single party has the master key). A canister requests a derived key from the subnet's threshold infrastructure, receives it encrypted under the client's transport public key, and only the client decrypts it locally. This unlocks decentralized key management (DKMS), encrypted on-chain storage, private messaging, identity-based encryption (IBE), timelock encryption, threshold BLS, and verifiable randomness, among other use cases.
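The three-party flow just described (subnet derives, encrypts for the transport key, client decrypts locally) can be sketched as a toy, std-only Rust model. The XOR "encryption" and all constants are invented placeholders, not the real BLS12-381 protocol; the point is only who can recover the raw key:

```rust
// Toy placeholders for the vetKD delivery flow. In the real protocol the
// transport key is an asymmetric BLS12-381 key pair; here a single shared
// secret stands in, purely to illustrate the information flow.
fn encrypt_for_transport(vetkey: u64, transport_key: u64) -> u64 {
    vetkey ^ transport_key
}

fn decrypt_with_transport(encrypted: u64, transport_key: u64) -> u64 {
    encrypted ^ transport_key
}

fn main() {
    // 1. Client generates an ephemeral transport secret (kept local).
    let transport_key: u64 = 0x1234_5678_9abc_def0;

    // 2. The subnet derives the vetKey and returns it encrypted for the client.
    let vetkey: u64 = 0x0bad_cafe_f00d_beef; // invented derived key
    let encrypted_key = encrypt_for_transport(vetkey, transport_key);

    // Canister and nodes only ever handle the encrypted blob.
    assert_ne!(encrypted_key, vetkey);

    // 3. Only the transport-key holder can decrypt, locally.
    assert_eq!(decrypt_with_transport(encrypted_key, transport_key), vetkey);
}
```

This is why reusing a transport key across sessions is listed below as a mistake: the transport secret is the only thing standing between the encrypted blob and the raw key.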
## Prerequisites - `icp-cli` >= 0.1.0 (`brew install dfinity/tap/icp-cli`) - Rust: `ic-vetkeys = "0.6"` ([crates.io](https://crates.io/crates/ic-vetkeys)) - Motoko: Use the raw management canister approach shown below - Frontend: `@dfinity/vetkeys` v0.4.0 (`npm install @dfinity/vetkeys`) - For local testing: `icp network start` creates a local test key automatically ## Canister IDs | Canister | ID | Purpose | |----------|-----|---------| | Management Canister | `aaaaa-aa` | Exposes `vetkd_public_key` and `vetkd_derive_key` system APIs | | Chain-key testing canister | `vrqyr-saaaa-aaaan-qzn4q-cai` | **Testing only:** fake vetKD implementation to test key derivation without paying production API fees. Insecure, do not use in production. | The management canister is not a real canister; it is a system-level API endpoint. Calls to `aaaaa-aa` are routed by the system to the vetKD-enabled subnet that holds the master key specified in `key_id`; that subnet's nodes run the threshold key derivation. Your canister can call from any subnet. **Testing canister:** The [chain-key testing canister](https://github.com/dfinity/chainkey-testing-canister) is deployed on mainnet and provides a fake vetKD implementation (hard-coded keys, no threshold) so you can exercise key derivation without production cycle costs. Use key name `insecure_test_key_1`. **Insecure, for testing only:** never use it in production or with sensitive data. You can also deploy your own instance from the repo. ### Master Key Names and API Fees Any canister on the IC can use any available master key regardless of which subnet the canister or the key resides on; the management canister routes calls to the subnet that holds the master key. | Key name | Environment | Purpose | Cycles (approx.)
| Notes | |----------------|-------------|-------------------|--------------------|-------| | `dfx_test_key` | Local | Development only | — | Created automatically with `icp network start` | | `test_key_1` | Mainnet | Testing | 10_000_000_000 | Subnet fuqsr (backed up on 2fq7c) | | `key_1` | Mainnet | Production | 26_153_846_153 | Subnet pzp6e (backed up on uzr34) | Fees depend on the **subnet where the master key resides** (and its size), not on the calling canister's subnet. If the canister may be blackholed or used by other canisters, send **more cycles** than the current cost so that future subnet size increases do not cause calls to fail; unused cycles are refunded. See [vetKD API — API fees](https://docs.internetcomputer.org/building-apps/network-features/vetkeys/api#api-fees) for current USD estimates. ## Key Concepts - **vetKey**: Key material derived deterministically from `(canister_id, context, input)`. Same inputs always produce the same key. Neither the canister nor any subnet node ever sees the raw key, as it is encrypted under the client's transport key until decrypted locally. - **Transport key**: An ephemeral key pair generated by the client. The public key is sent to the canister so the IC can encrypt the derived key for delivery. Only the client holding the corresponding private key can decrypt the result. - **Context**: A domain separator blob. Isolates derived subkeys per use case (e.g. per feature or key purpose) and prevents key collisions within the same canister. Think of it as a namespace. - **Input**: Application-defined data that identifies which key to derive (e.g. user principal, file ID, chat room ID). It is sent in plaintext to the management canister. Use it only as an identifier, never for secret data. - **IBE (Identity-Based Encryption)**: A scheme where you encrypt to an identity (e.g. a principal) using a derived public key. 
vetKeys enables IBE on the IC: anyone can encrypt to a principal using the canister's derived public key; only that principal can obtain the matching vetKey and decrypt. ## Mistakes That Break Your Build 1. **Not pinning dependency versions.** The `ic-vetkeys` crate and `@dfinity/vetkeys` npm package are published, but the APIs may still change in new releases. Pin your versions and re-test after upgrades. If something stops working after an upgrade, consult the relevant change notes to understand what happened. 2. **Reusing transport keys across sessions.** Each session must generate a fresh transport key pair. The Rust and TypeScript libraries include support for generating keys safely; use them if at all possible. 3. **Using raw `vetkd_derive_key` output as an encryption key.** The output is an encrypted blob. You must decrypt it with the transport secret to get the vetKey (raw key material). What you do next depends on your use case: for example, you might derive a symmetric key (e.g. for AES) via `toDerivedKeyMaterial()` or the equivalent. Do not use the decrypted bytes directly as an AES key. Other uses (IBE decryption, signing, etc.) consume the vetKey in their own way; the libraries document the right pattern for each. 4. **Confusing vetKD with traditional public-key crypto.** There are no static key pairs per user. Keys are derived on-demand from the subnet's threshold master key (via the vetKD protocol). The same (canister, context, input) always yields the same derived key. 5. **Putting secret data in the `input` field.** The input is sent to the management canister in plaintext. It is a key identifier, not an encrypted payload. Use it for IDs (principal, document ID), never for the actual secret data. 6. **Forgetting that `vetkd_derive_key` is an async inter-canister call.** It costs cycles and requires `await`. Capture `caller` before the await as a defensive practice. 7.
**Using `context` inconsistently.** If the backend uses `b"my_app_v1"` as context but the frontend verification uses `b"my_app"`, the derived keys will not match and decryption will silently fail. 8. **Not attaching enough cycles to `vetkd_derive_key`.** `vetkd_derive_key` consumes cycles; `vetkd_public_key` does not. For derive_key, `key_1` costs ~26B cycles and `test_key_1` costs ~10B cycles. ## System API (Candid) The vetKD API lets canisters request vetKeys derived by the threshold protocol. Derivation is **deterministic**: the same inputs always produce the same key, so keys can be retrieved reliably. Different inputs yield different keys—canisters can derive an unlimited number of unique keys. Summary below; full spec: [vetKD API](https://docs.internetcomputer.org/building-apps/network-features/vetkeys/api) and the [IC interface specification](https://internetcomputer.org/docs/current/references/ic-interface-spec#ic-vetkd_derive_key). ### vetkd_public_key Returns a public key used to **verify** keys derived with `vetkd_derive_key`. With an empty context you get the canister-level master public key; with a non-empty context you get the derived subkey for that context. In IBE, this public key lets anyone encrypt to an identity (e.g. a principal); only the holder of that identity can later obtain the matching vetKey and decrypt—no prior key exchange or recipient presence required. ```candid vetkd_public_key : (record { canister_id : opt canister_id; context : blob; key_id : record { curve : vetkd_curve; name : text }; }) -> (record { public_key : blob }) ``` - `canister_id`: Optional. If omitted (`null`), the public key for the **calling canister** is returned; if provided, the key for that canister is returned. - `context`: Domain separator which has the same meaning as in `vetkd_derive_key`. Ensures keys are derived in a specific context and avoids collisions across apps or use cases. - `key_id.curve`: `bls12_381_g2` (only supported curve). 
- `key_id.name`: Master key name: `dfx_test_key` (local), `test_key_1`, or `key_1`. You can also derive this public key **offline** from the known mainnet master public key; see "Offline Public Key Derivation" below. ### vetkd_derive_key Derives key material for the given (context, input) and returns it **encrypted** under the recipient's transport public key. Only the holder of the transport secret can decrypt. The decrypted material is then used according to your use case (e.g. via `toDerivedKeyMaterial()` for symmetric keys, or for IBE decryption). ```candid vetkd_derive_key : (record { input : blob; context : blob; transport_public_key : blob; key_id : record { curve : vetkd_curve; name : text }; }) -> (record { encrypted_key : blob }) ``` - `input`: Arbitrary data used as the key identifier—different inputs yield different derived keys. Does not need to be random; sent in plaintext to the management canister. - `context`: Domain separator; must match the context used when obtaining the public key (e.g. for verification or IBE). - `transport_public_key`: The recipient's public key; the derived key is encrypted under this for secure delivery. - Returns: `encrypted_key`. Decrypt with the transport secret to get the raw vetKey, then use it as required (e.g. derive a symmetric key; do not use raw bytes directly as an AES key). Master key names and cycle costs are in **Master Key Names and API Fees** under Canister IDs. 
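The determinism and domain-separation properties of `vetkd_derive_key` can be illustrated in plain Rust. The hash below is a placeholder for the threshold BLS derivation, and the canister/context/input values are invented; only the assertions' shape carries over to the real API:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Placeholder derivation: a stdlib hash stands in for the subnet's
// threshold BLS key derivation over (canister_id, context, input).
fn derive_key(canister_id: &[u8], context: &[u8], input: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    (canister_id, context, input).hash(&mut h);
    h.finish()
}

fn main() {
    let canister = b"example-canister"; // invented identifier

    // Deterministic: the same (canister, context, input) always yields
    // the same key, so keys can be re-derived reliably.
    assert_eq!(
        derive_key(canister, b"my_app_v1", b"doc-42"),
        derive_key(canister, b"my_app_v1", b"doc-42"),
    );

    // Domain separation: a different context yields an unrelated key.
    // This is why a backend/frontend context mismatch fails silently.
    assert_ne!(
        derive_key(canister, b"my_app_v1", b"doc-42"),
        derive_key(canister, b"my_app", b"doc-42"),
    );

    // Different inputs yield different keys: a canister can derive an
    // effectively unlimited number of distinct keys.
    assert_ne!(
        derive_key(canister, b"my_app_v1", b"doc-42"),
        derive_key(canister, b"my_app_v1", b"doc-43"),
    );
}
```

In the real system the input is additionally bound to the calling canister's ID by the subnet, so one canister can never obtain another canister's derived keys.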
## Implementation ### Rust **Cargo.toml:** ```toml [dependencies] candid = "0.10" ic-cdk = "0.19" serde = { version = "1", features = ["derive"] } serde_bytes = "0.11" # High-level library (recommended) — source: https://github.com/dfinity/vetkeys ic-vetkeys = "0.6" ic-stable-structures = "0.7" ``` **Using ic-vetkeys library (recommended):** ```rust use candid::Principal; use ic_cdk::update; use ic_stable_structures::memory_manager::{MemoryId, MemoryManager, VirtualMemory}; use ic_stable_structures::DefaultMemoryImpl; use ic_vetkeys::key_manager::KeyManager; use ic_vetkeys::types::{AccessRights, VetKDCurve, VetKDKeyId}; // KeyManager is generic over an AccessControl type — AccessRights is the default. // It uses stable memory for persistent storage of access control state. thread_local! { static MEMORY_MANAGER: std::cell::RefCell<MemoryManager<DefaultMemoryImpl>> = std::cell::RefCell::new(MemoryManager::init(DefaultMemoryImpl::default())); static KEY_MANAGER: std::cell::RefCell<Option<KeyManager<AccessRights>>> = std::cell::RefCell::new(None); } #[ic_cdk::init] fn init() { let key_id = VetKDKeyId { curve: VetKDCurve::Bls12381G2, name: "key_1".to_string(), // "dfx_test_key" for local, "test_key_1" for testing }; MEMORY_MANAGER.with(|mm| { let mm = mm.borrow(); KEY_MANAGER.with(|km| { *km.borrow_mut() = Some(KeyManager::init( "my_app_v1", // domain separator key_id, mm.get(MemoryId::new(0)), // config memory mm.get(MemoryId::new(1)), // access control memory mm.get(MemoryId::new(2)), // shared keys memory )); }); }); } #[update] async fn get_encrypted_vetkey(subkey_id: Vec<u8>, transport_public_key: Vec<u8>) -> Vec<u8> { let caller = ic_cdk::caller(); // Capture BEFORE await let future = KEY_MANAGER.with(|km| { let km = km.borrow(); let km = km.as_ref().expect("not initialized"); km.get_encrypted_vetkey(caller, subkey_id, transport_public_key) .expect("access denied") }); future.await } #[update] async fn get_vetkey_verification_key() -> Vec<u8> { let future = KEY_MANAGER.with(|km| { let km = km.borrow(); let km = km.as_ref().expect("not
initialized"); km.get_vetkey_verification_key() }); future.await } ``` **Calling management canister directly (lower level):** ```rust use candid::{CandidType, Deserialize, Principal}; use ic_cdk::update; #[derive(CandidType, Deserialize)] struct VetKdKeyId { curve: VetKdCurve, name: String, } #[derive(CandidType, Deserialize)] enum VetKdCurve { #[serde(rename = "bls12_381_g2")] Bls12381G2, } #[derive(CandidType)] struct VetKdPublicKeyRequest { canister_id: Option, context: Vec, key_id: VetKdKeyId, } #[derive(CandidType, Deserialize)] struct VetKdPublicKeyResponse { public_key: Vec, } #[derive(CandidType)] struct VetKdDeriveKeyRequest { input: Vec, context: Vec, transport_public_key: Vec, key_id: VetKdKeyId, } #[derive(CandidType, Deserialize)] struct VetKdDeriveKeyResponse { encrypted_key: Vec, } const CONTEXT: &[u8] = b"my_app_v1"; fn key_id() -> VetKdKeyId { VetKdKeyId { curve: VetKdCurve::Bls12381G2, // Key names: "dfx_test_key" for local, "test_key_1" for mainnet testing, "key_1" for production name: "key_1".to_string(), } } #[update] async fn vetkd_public_key() -> Vec { let request = VetKdPublicKeyRequest { canister_id: None, // defaults to this canister context: CONTEXT.to_vec(), key_id: key_id(), }; // vetkd_public_key does not require cycles (unlike vetkd_derive_key). let (response,): (VetKdPublicKeyResponse,) = ic_cdk::api::call::call( Principal::management_canister(), // aaaaa-aa "vetkd_public_key", (request,), ) .await .expect("vetkd_public_key call failed"); response.public_key } #[update] async fn vetkd_derive_key(transport_public_key: Vec) -> Vec { let caller = ic_cdk::caller(); // MUST capture before await let request = VetKdDeriveKeyRequest { input: caller.as_slice().to_vec(), // derive key specific to this caller context: CONTEXT.to_vec(), transport_public_key, key_id: key_id(), }; // key_1 costs ~26B cycles, test_key_1 costs ~10B cycles. 
let (response,): (VetKdDeriveKeyResponse,) = ic_cdk::api::call::call_with_payment128( Principal::management_canister(), "vetkd_derive_key", (request,), 26_000_000_000, // cycles for key_1 (use 10_000_000_000 for test_key_1) ) .await .expect("vetkd_derive_key call failed"); response.encrypted_key } ``` ### Motoko **mops.toml:** ```toml [package] name = "my-vetkd-app" version = "0.1.0" [dependencies] core = "2.0.0" ``` **Using the management canister directly:** ```motoko import Blob "mo:core/Blob"; import Principal "mo:core/Principal"; import Text "mo:core/Text"; persistent actor { type VetKdCurve = { #bls12_381_g2 }; type VetKdKeyId = { curve : VetKdCurve; name : Text; }; type VetKdPublicKeyRequest = { canister_id : ?Principal; context : Blob; key_id : VetKdKeyId; }; type VetKdPublicKeyResponse = { public_key : Blob; }; type VetKdDeriveKeyRequest = { input : Blob; context : Blob; transport_public_key : Blob; key_id : VetKdKeyId; }; type VetKdDeriveKeyResponse = { encrypted_key : Blob; }; let managementCanister : actor { vetkd_public_key : VetKdPublicKeyRequest -> async VetKdPublicKeyResponse; vetkd_derive_key : VetKdDeriveKeyRequest -> async VetKdDeriveKeyResponse; } = actor "aaaaa-aa"; let context : Blob = Text.encodeUtf8("my_app_v1"); // Key names: "dfx_test_key" for local, "test_key_1" for mainnet testing, "key_1" for production func keyId() : VetKdKeyId { { curve = #bls12_381_g2; name = "key_1" } }; public shared func getPublicKey() : async Blob { // vetkd_public_key does not require cycles (unlike vetkd_derive_key). let response = await managementCanister.vetkd_public_key({ canister_id = null; context; key_id = keyId(); }); response.public_key }; public shared ({ caller }) func deriveKey(transportPublicKey : Blob) : async Blob { // caller is captured here, before the await. vetkd_derive_key requires cycles. 
let response = await (with cycles = 26_000_000_000) managementCanister.vetkd_derive_key({ input = Principal.toBlob(caller); context; transport_public_key = transportPublicKey; key_id = keyId(); }); response.encrypted_key }; }; ``` ### Frontend (TypeScript) The frontend generates a transport key pair, sends the public half to the canister, receives the encrypted derived key, decrypts it with the transport secret to get the vetKey (raw key material), then derives a symmetric key from that material (e.g. via `toDerivedKeyMaterial()`) for AES or other use. ```typescript import { TransportSecretKey, DerivedPublicKey, EncryptedVetKey } from "@dfinity/vetkeys"; // 1. Generate a transport secret key (BLS12-381) const seed = crypto.getRandomValues(new Uint8Array(32)); const transportSecretKey = TransportSecretKey.fromSeed(seed); const transportPublicKey = transportSecretKey.publicKey(); // 2. Request encrypted vetkey and verification key from your canister const [encryptedKeyBytes, verificationKeyBytes] = await Promise.all([ backendActor.get_encrypted_vetkey(subkeyId, transportPublicKey), backendActor.get_vetkey_verification_key(), ]); // 3. Deserialize and decrypt const verificationKey = DerivedPublicKey.deserialize(new Uint8Array(verificationKeyBytes)); const encryptedVetKey = EncryptedVetKey.deserialize(new Uint8Array(encryptedKeyBytes)); const vetKey = encryptedVetKey.decryptAndVerify( transportSecretKey, verificationKey, new Uint8Array(subkeyId), ); // 4. Derive a symmetric key for AES-GCM const aesKeyMaterial = vetKey.toDerivedKeyMaterial(); const aesKey = await crypto.subtle.importKey( "raw", aesKeyMaterial.data.slice(0, 32), // 256-bit AES key { name: "AES-GCM" }, false, ["encrypt", "decrypt"], ); // 5. Encrypt const iv = crypto.getRandomValues(new Uint8Array(12)); const ciphertext = await crypto.subtle.encrypt( { name: "AES-GCM", iv }, aesKey, new TextEncoder().encode("secret message"), ); // 6. 
Decrypt const plaintext = await crypto.subtle.decrypt( { name: "AES-GCM", iv }, aesKey, ciphertext, ); ``` The `@dfinity/vetkeys` package also provides higher-level abstractions via sub-paths: - **`@dfinity/vetkeys/key_manager`** -- `KeyManager` and `DefaultKeyManagerClient` for managing access-controlled keys - **`@dfinity/vetkeys/encrypted_maps`** -- `EncryptedMaps` and `DefaultEncryptedMapsClient` for encrypted key-value storage These mirror the Rust `KeyManager` and `EncryptedMaps` types and handle the transport key flow automatically. ### Offline Public Key Derivation You can derive public keys offline (without any canister calls) from the known mainnet master public key for a given key name (e.g. `key_1`). This is useful for IBE: you derive the canister's public key for your context, then encrypt to an identity (e.g. a principal) without the recipient or the canister being online. **Rust:** ```rust use ic_vetkeys::{MasterPublicKey, DerivedPublicKey}; // Start from the known mainnet master public key for key_1 let master_key = MasterPublicKey::for_mainnet_key("key_1") .expect("unknown key name"); // Derive the canister-level key let canister_key = master_key.derive_canister_key(canister_id.as_slice()); // Derive a sub-key for a specific context/identity let derived_key: DerivedPublicKey = canister_key.derive_sub_key(b"my_app_v1"); // Use derived_key for IBE encryption — no canister call needed ``` **TypeScript:** ```typescript import { MasterPublicKey, DerivedPublicKey } from "@dfinity/vetkeys"; // Start from the known mainnet master public key const masterKey = MasterPublicKey.productionKey(); // Derive the canister-level key const canisterKey = masterKey.deriveCanisterKey(canisterId); // Derive a sub-key for a specific context/identity const derivedKey: DerivedPublicKey = canisterKey.deriveSubKey( new TextEncoder().encode("my_app_v1"), ); // Use derivedKey for IBE encryption — no canister call needed ``` ### Identity-Based Encryption (IBE) IBE lets you 
encrypt to an identity (e.g. a principal) using only the canister's derived public key—the recipient does not need to be online or have registered a key beforehand. The recipient later authenticates to the canister, obtains their vetKey (derived for that identity) via `vetkd_derive_key`, and decrypts locally. **TypeScript:** ```typescript import { TransportSecretKey, DerivedPublicKey, EncryptedVetKey, IbeCiphertext, IbeIdentity, IbeSeed, } from "@dfinity/vetkeys"; // --- Encrypt (sender side, no canister call needed) --- // Derive the recipient's public key offline (see "Offline Public Key Derivation" above) const recipientIdentity = IbeIdentity.fromBytes(recipientPrincipalBytes); const seed = IbeSeed.random(); const plaintext = new TextEncoder().encode("secret message"); const ciphertext = IbeCiphertext.encrypt(derivedPublicKey, recipientIdentity, plaintext, seed); const serialized = ciphertext.serialize(); // store or transmit this // --- Decrypt (recipient side, requires canister call to get vetKey) --- // 1. Get the vetKey (same flow as the Frontend section above) const transportSecretKey = TransportSecretKey.fromSeed(crypto.getRandomValues(new Uint8Array(32))); const [encryptedKeyBytes, verificationKeyBytes] = await Promise.all([ backendActor.get_encrypted_vetkey(subkeyId, transportSecretKey.publicKey()), backendActor.get_vetkey_verification_key(), ]); const verificationKey = DerivedPublicKey.deserialize(new Uint8Array(verificationKeyBytes)); const encryptedVetKey = EncryptedVetKey.deserialize(new Uint8Array(encryptedKeyBytes)); const vetKey = encryptedVetKey.decryptAndVerify( transportSecretKey, verificationKey, new Uint8Array(subkeyId), ); // 2. 
Decrypt the IBE ciphertext const deserialized = IbeCiphertext.deserialize(serialized); const decrypted = deserialized.decrypt(vetKey); // decrypted is Uint8Array containing "secret message" ``` **Rust (off-chain client or test):** ```rust use ic_vetkeys::{ DerivedPublicKey, IbeCiphertext, IbeIdentity, IbeSeed, VetKey, }; // --- Encrypt --- let identity = IbeIdentity::from_bytes(recipient_principal.as_slice()); let seed = IbeSeed::new(&mut rand::rng()); let plaintext = b"secret message"; let ciphertext = IbeCiphertext::encrypt( &derived_public_key, &identity, plaintext, &seed, ); let serialized = ciphertext.serialize(); // --- Decrypt (after obtaining the VetKey) --- let deserialized = IbeCiphertext::deserialize(&serialized) .expect("invalid ciphertext"); let decrypted = deserialized.decrypt(&vet_key) .expect("decryption failed"); // decrypted == b"secret message" ``` ### Higher-Level Abstractions: KeyManager & EncryptedMaps Both the Rust crate and TypeScript package provide two higher-level modules that handle the transport key flow, access control, and encrypted storage for you: - **`KeyManager`** (Rust) / **`KeyManager`** (TS) — Manages access-controlled vetKeys with stable storage. The canister enforces who may request which keys; the library handles derivation requests, user rights (`Read`, `ReadWrite`, `ReadWriteManage`), and key sharing between principals. - **`EncryptedMaps`** (Rust) / **`EncryptedMaps`** (TS) — Builds on KeyManager to provide an encrypted key-value store. Each map is access-controlled and encrypted under a derived vetKey. Encryption and decryption of values are handled on the client (frontend) using vetKeys; the canister only stores ciphertext. In Rust, these live in `ic_vetkeys::key_manager` and `ic_vetkeys::encrypted_maps`. In TypeScript, import from `@dfinity/vetkeys/key_manager` and `@dfinity/vetkeys/encrypted_maps`. See the [vetkeys repository](https://github.com/dfinity/vetkeys) for full examples. 
## Deploy & Test ### Local Development ```bash # Start local replica (creates dfx_test_key automatically) icp network start -d # Deploy your canister icp deploy backend # Test public key retrieval icp canister call backend getPublicKey '()' # Returns: (blob "...") -- the vetKD public key # For derive_key, you need a transport public key (generated by frontend) # Test with a dummy 48-byte blob: icp canister call backend deriveKey '(blob "\00\01\02\03\04\05\06\07\08\09\0a\0b\0c\0d\0e\0f\10\11\12\13\14\15\16\17\18\19\1a\1b\1c\1d\1e\1f\20\21\22\23\24\25\26\27\28\29\2a\2b\2c\2d\2e\2f")' ``` ### Mainnet ```bash # Deploy to mainnet icp deploy backend -e ic # Use test_key_1 for initial testing, key_1 for production # Make sure your canister code references the correct key name ``` ## Verify It Works ```bash # 1. Verify public key is returned (non-empty blob) icp canister call backend getPublicKey '()' # Expected: (blob "\ab\cd\ef...") -- 48+ bytes of BLS public key data # 2. Verify derive_key returns encrypted key (non-empty blob) icp canister call backend deriveKey '(blob "\00\01...")' # Expected: (blob "\12\34\56...") -- encrypted key material # 3. Verify determinism: same (caller, context, input) and same transport key produce same encrypted_key # Call deriveKey twice with the same identity and transport key # Expected: identical encrypted_key blobs both times # 4. Verify isolation: different callers get different keys icp identity new test-user-1 --storage-mode=plaintext icp identity new test-user-2 --storage-mode=plaintext icp identity default test-user-1 icp canister call backend deriveKey '(blob "\00\01...")' # Note the output icp identity default test-user-2 icp canister call backend deriveKey '(blob "\00\01...")' # Expected: DIFFERENT encrypted_key (different caller = different derived key) # 5. 
Frontend integration test # Open the frontend, trigger encryption/decryption # Verify: encrypted data can be decrypted by the same user # Verify: encrypted data CANNOT be decrypted by a different user ``` --- --- name: wallet title: Cycles Wallet Management category: Infrastructure description: "Create, fund, and manage cycles wallets. Top-up canisters, check balances, and automate cycle management." endpoints: 7 version: 1.4.2 status: stable dependencies: [] requires: [icp-cli >= 0.1.0] tags: [cycles, wallet, topup, canister, funding, management, icp] --- # Cycles & Canister Management ## What This Is Cycles are the computation fuel for canisters on the Internet Computer. Every canister operation (execution, storage, messaging) burns cycles. When a canister runs out of cycles, it freezes and eventually gets deleted. 1 trillion cycles (1T) is pegged to 1 XDR (roughly 1.3 USD); the exact ICP-to-cycles conversion rate is set by the NNS and fluctuates with the ICP price via the CMC. **Note:** icp-cli uses the **cycles ledger** (`um5iw-rqaaa-aaaaq-qaaba-cai`) by default. The cycles ledger is a single canister that tracks cycle balances for all principals, similar to a token ledger. Commands like `icp cycles balance`, `icp cycles mint`, and `icp canister top-up` go through the cycles ledger. There is no legacy wallet concept in icp-cli. The programmatic patterns below (accepting cycles, creating canisters via management canister) remain the same regardless of which funding mechanism is used.
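Freeze risk can be reasoned about numerically: a canister must retain its freezing threshold's worth of idle burn as a reserve, and it freezes once the balance dips below that reserve. A minimal sketch of the runway arithmetic (the function names and the burn rate in the comments are illustrative assumptions, not part of any IC SDK; measure your canister's actual idle burn):

```typescript
// Sketch: estimate how long a cycle balance lasts before the canister
// freezes. Names here are illustrative, not an SDK API.

const T = 1_000_000_000_000n; // 1 trillion cycles

// Cycles the canister must hold in reserve: the freezing threshold
// (in seconds) worth of idle burn.
function freezeReserve(idleBurnPerDay: bigint, freezingThresholdSecs: bigint): bigint {
  return (idleBurnPerDay * freezingThresholdSecs) / 86_400n;
}

// Whole days until the balance dips below the freeze reserve.
function daysUntilFreeze(
  balance: bigint,
  idleBurnPerDay: bigint,
  freezingThresholdSecs: bigint,
): bigint {
  if (idleBurnPerDay <= 0n) throw new RangeError("idleBurnPerDay must be positive");
  const reserve = freezeReserve(idleBurnPerDay, freezingThresholdSecs);
  if (balance <= reserve) return 0n; // already at or past the freezing point
  return (balance - reserve) / idleBurnPerDay;
}
```

Example: with a 10T balance, 0.1T/day idle burn, and the default 30-day threshold (2_592_000 s), the reserve is 3T, leaving 70 days before the freeze point.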
## Prerequisites - icp-cli >= 0.1.0 (install: `brew install dfinity/tap/icp-cli`) - An identity with ICP balance for converting to cycles (mainnet) - For local development: cycles are unlimited by default ## Canister IDs | Service | Canister ID | Purpose | |---------|------------|---------| | Cycles Minting Canister (CMC) | `rkp4c-7iaaa-aaaaa-aaaca-cai` | Converts ICP to cycles, creates canisters | | NNS Ledger (ICP) | `ryjl3-tyaaa-aaaaa-aaaba-cai` | ICP token transfers | | Management Canister | `aaaaa-aa` | Canister lifecycle (create, install, stop, delete, status) | The Management Canister (`aaaaa-aa`) is a virtual canister -- it does not exist on a specific subnet but is handled by every subnet's execution layer. ## Mistakes That Break Your Build 1. **Running out of cycles silently freezes the canister** -- There is no warning. The canister stops responding to all calls. If cycles are not topped up before the freezing threshold, the canister and all its data will be permanently deleted. Set a freezing threshold and monitor balances. 2. **Not setting freezing_threshold** -- Default is 30 days. If your canister burns cycles fast (high traffic, large stable memory), 30 days may not be enough warning. Set it higher for production canisters. The freezing threshold defines how many seconds worth of idle cycles the canister must retain before it freezes. 3. **Confusing local vs mainnet cycles** -- Local replicas give canisters virtually unlimited cycles. Code that works locally may fail on mainnet because the canister has insufficient cycles. Always test with realistic cycle amounts before mainnet deployment. 4. **Sending cycles to the wrong canister** -- Cycles sent to a canister cannot be retrieved. There is no refund mechanism for cycles transferred to the wrong principal. Double-check the canister ID before topping up. 5. 
**Forgetting to set the canister controller** -- If you lose the controller identity, you permanently lose the ability to upgrade, top up, or manage the canister. Always add a backup controller. Use `icp canister update-settings --add-controller PRINCIPAL` to add one. 6. **Using ExperimentalCycles in mo:core** -- In mo:core 2.0, the module is renamed to `Cycles`. `import ExperimentalCycles "mo:base/ExperimentalCycles"` will fail. Use `import Cycles "mo:core/Cycles"`. 7. **Not accounting for the transfer fee when converting ICP to cycles** -- Converting ICP to cycles via the CMC requires an ICP transfer to the CMC first. That transfer costs the standard 10_000 e8s (0.0001 ICP) fee. If you send your exact ICP balance, the transfer will fail due to insufficient funds after the fee. ## Implementation ### Motoko #### Checking and Accepting Cycles ```motoko import Cycles "mo:core/Cycles"; import Nat "mo:core/Nat"; import Principal "mo:core/Principal"; import Runtime "mo:core/Runtime"; persistent actor { // Check this canister's cycle balance public query func getBalance() : async Nat { Cycles.balance() }; // Accept cycles sent with a call (for "tip jar" or payment patterns) public func deposit() : async Nat { let available = Cycles.available(); if (available == 0) { Runtime.trap("No cycles sent with this call") }; let accepted = Cycles.accept<system>(available); accepted }; // Send cycles to another canister via inter-canister call public func topUpCanister(target : Principal) : async () { let targetActor = actor (Principal.toText(target)) : actor { deposit_cycles : shared () -> async (); }; // Attach 1T cycles to the call await (with cycles = 1_000_000_000_000) targetActor.deposit_cycles(); }; } ``` #### Creating a Canister Programmatically ```motoko import Cycles "mo:core/Cycles"; import Principal "mo:core/Principal"; persistent actor Self { type CanisterId = { canister_id : Principal }; type CreateCanisterSettings = { controllers : ?[Principal]; compute_allocation : ?Nat; memory_allocation : ?Nat;
freezing_threshold : ?Nat; }; // Management canister interface let ic = actor ("aaaaa-aa") : actor { create_canister : shared { settings : ?CreateCanisterSettings } -> async CanisterId; canister_status : shared { canister_id : Principal } -> async { status : { #running; #stopping; #stopped }; memory_size : Nat; cycles : Nat; settings : CreateCanisterSettings; module_hash : ?Blob; }; deposit_cycles : shared { canister_id : Principal } -> async (); stop_canister : shared { canister_id : Principal } -> async (); delete_canister : shared { canister_id : Principal } -> async (); }; // Create a new canister with 1T cycles public func createNewCanister() : async Principal { let result = await (with cycles = 1_000_000_000_000) ic.create_canister({ settings = ?{ controllers = ?[Principal.fromActor(Self)]; compute_allocation = null; memory_allocation = null; freezing_threshold = ?2_592_000; // 30 days in seconds }; }); result.canister_id }; // Check a canister's status and cycle balance public func checkStatus(canisterId : Principal) : async Nat { let status = await ic.canister_status({ canister_id = canisterId }); status.cycles }; // Top up another canister public func topUp(canisterId : Principal, amount : Nat) : async () { await (with cycles = amount) ic.deposit_cycles({ canister_id = canisterId }); }; } ``` ### Rust #### Cargo.toml Dependencies ```toml [package] name = "wallet_backend" version = "0.1.0" edition = "2021" [lib] crate-type = ["cdylib"] [dependencies] ic-cdk = "0.19" candid = "0.10" serde = { version = "1", features = ["derive"] } ``` #### Checking Balance and Accepting Cycles ```rust use ic_cdk::{query, update}; use candid::Nat; #[query] fn get_balance() -> Nat { Nat::from(ic_cdk::api::canister_cycle_balance()) } #[update] fn deposit() -> Nat { let available = ic_cdk::api::msg_cycles_available(); if available == 0 { ic_cdk::trap("No cycles sent with this call"); } let accepted = ic_cdk::api::msg_cycles_accept(available); Nat::from(accepted) } ``` #### 
Creating and Managing Canisters ```rust use candid::{CandidType, Deserialize, Nat, Principal}; use ic_cdk::update; use ic_cdk::management_canister::{ create_canister_with_extra_cycles, canister_status, deposit_cycles, stop_canister, delete_canister, CreateCanisterArgs, CanisterStatusArgs, DepositCyclesArgs, StopCanisterArgs, DeleteCanisterArgs, CanisterSettings, CanisterStatusResult, }; #[update] async fn create_new_canister() -> Principal { let caller = ic_cdk::api::canister_self(); // capture canister's own principal let user = ic_cdk::api::msg_caller(); // capture caller before await let settings = CanisterSettings { controllers: Some(vec![caller, user]), compute_allocation: None, memory_allocation: None, freezing_threshold: Some(Nat::from(2_592_000u64)), // 30 days reserved_cycles_limit: None, log_visibility: None, wasm_memory_limit: None, wasm_memory_threshold: None, environment_variables: None, }; let arg = CreateCanisterArgs { settings: Some(settings), }; // Send 1T cycles with the create call let result = create_canister_with_extra_cycles(&arg, 1_000_000_000_000u128) .await .expect("Failed to create canister"); result.canister_id } #[update] async fn check_status(canister_id: Principal) -> CanisterStatusResult { canister_status(&CanisterStatusArgs { canister_id }) .await .expect("Failed to get canister status") } #[update] async fn top_up(canister_id: Principal, amount: u128) { deposit_cycles(&DepositCyclesArgs { canister_id }, amount) .await .expect("Failed to deposit cycles"); } #[update] async fn stop_and_delete(canister_id: Principal) { stop_canister(&StopCanisterArgs { canister_id }) .await .expect("Failed to stop canister"); delete_canister(&DeleteCanisterArgs { canister_id }) .await .expect("Failed to delete canister"); } ``` ## Deploy & Test ### Check Cycle Balance ```bash # Check your canister's cycle balance icp canister status backend # Look for "Balance:" line in output # Check balance on mainnet icp canister status backend -e ic # Check any 
canister by ID icp canister status ryjl3-tyaaa-aaaaa-aaaba-cai -e ic ``` ### Top Up a Canister ```bash # Top up with cycles from the cycles ledger (local) icp canister top-up backend --amount 1000000000000 # Adds 1T cycles to the backend canister # Top up on mainnet icp canister top-up backend --amount 1000000000000 -e ic # Convert ICP to cycles and top up in one step (mainnet) icp cycles mint --amount 1.0 -e ic icp canister top-up backend --amount 1000000000000 -e ic ``` ### Create a Canister via icp ```bash # Create an empty canister (local) icp canister create my_canister # Create on mainnet with specific cycles icp canister create my_canister -e ic --with-cycles 2000000000000 # Add a backup controller icp canister update-settings my_canister --add-controller BACKUP_PRINCIPAL_HERE ``` ### Set Freezing Threshold ```bash # Set freezing threshold to 90 days (in seconds: 90 * 24 * 60 * 60 = 7776000) icp canister update-settings backend --freezing-threshold 7776000 # Mainnet icp canister update-settings backend --freezing-threshold 7776000 -e ic ``` ## Verify It Works ```bash # 1. Deploy a canister and check its status icp network start -d icp deploy backend icp canister status backend # Expected output includes: # Status: Running # Balance: 3_100_000_000_000 Cycles (local default, varies) # Freezing threshold: 2_592_000 # 2. Check balance programmatically (if you added getBalance) icp canister call backend getBalance '()' # Expected: a large nat value, e.g. (3_100_000_000_000 : nat) # 3. Verify controllers icp canister info backend # Expected: Shows your principal as controller # 4. On mainnet: verify cycles balance is not zero icp canister status backend -e ic # If Balance shows 0, the canister will freeze. Top up immediately. # 5. 
Verify freezing threshold was set icp canister status backend # Look for "Freezing threshold:" -- should match what you set ``` ### Monitoring Checklist for Production ```bash # Run periodically for mainnet canisters: CANISTER_ID="your-canister-id-here" # Check balance icp canister status $CANISTER_ID -e ic # Warning thresholds: # < 5T cycles -- top up soon # < 1T cycles -- urgent, canister may freeze # 0 cycles -- canister is frozen, data at risk of deletion ```
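The periodic check above is easy to automate. A minimal sketch that extracts the balance from `icp canister status` output and applies the same warning thresholds (the `Balance:` line format is taken from the sample output earlier in this document and may differ across icp-cli versions; the function names are illustrative):

```typescript
// Sketch: classify a canister's cycle balance against the warning
// thresholds from the monitoring checklist. Assumes the
// "Balance: 3_100_000_000_000 Cycles" line format shown earlier.

const T = 1_000_000_000_000n; // 1 trillion cycles

// Extract the cycle balance from `icp canister status` output, or null
// if no Balance line is found.
function parseBalance(statusOutput: string): bigint | null {
  const m = statusOutput.match(/Balance:\s*([0-9_]+)\s*Cycles/);
  return m ? BigInt(m[1].replace(/_/g, "")) : null;
}

function cycleAlert(balance: bigint): "frozen" | "urgent" | "top-up-soon" | "ok" {
  if (balance === 0n) return "frozen";   // data at risk of deletion
  if (balance < 1n * T) return "urgent"; // canister may freeze
  if (balance < 5n * T) return "top-up-soon";
  return "ok";
}
```

Wire this to a cron job that shells out to `icp canister status $CANISTER_ID -e ic` and alerts on anything other than `"ok"`.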