From 29d71a671ed203766df1dfb2fe13410c81efdaf2 Mon Sep 17 00:00:00 2001 From: adpthegreat Date: Tue, 24 Sep 2024 02:25:30 +0100 Subject: [PATCH 1/6] - Ran grammarly over the text - Updated to use clearer and simple language - Updated code snippets , removed top level async , used solana helpers functions where necessary - Renamed variables to ix to instruction --- .../state-compression/compressed-nfts.md | 389 +++++++++--------- 1 file changed, 185 insertions(+), 204 deletions(-) diff --git a/content/courses/state-compression/compressed-nfts.md b/content/courses/state-compression/compressed-nfts.md index 461bec325..0b6241d9c 100644 --- a/content/courses/state-compression/compressed-nfts.md +++ b/content/courses/state-compression/compressed-nfts.md @@ -13,108 +13,98 @@ description: ## Summary - **Compressed NFTs (cNFTs)** use **State Compression** to hash NFT data and - store the hash onchain in an account using a **concurrent Merkle tree** - structure + store the hash onchain in an account using a **Concurrent Merkle Tree** + structure. - The cNFT data hash can’t be used to infer the cNFT data, but it can be used to - **verify** if the cNFT data you’re seeing is correct + **verify** if the cNFT data you see is correct. - Supporting RPC providers **index** cNFT data offchain when the cNFT is minted - so that you can use the **Read API** to access the data + so that you can use the **Read API** to access the data. - The **Metaplex Bubblegum program** is an abstraction on top of the **State - Compression** program that enables you to more simply create, mint, and manage - cNFT collections + Compression** program that enables you to create, mint, and manage + cNFT collections. ## Lesson Compressed NFTs (cNFTs) are exactly what their name suggests: NFTs whose structure takes up less account storage than traditional NFTs. 
Compressed NFTs -leverage a concept called **State Compression** to store data in a way that +use a technique called [**State Compression**](https://solana.com/docs/advanced/state-compression) to store data in a way that drastically reduces costs. -Solana’s transaction costs are so cheap that most users never think about how -expensive minting NFTs can be at scale. The cost to set up and mint 1 million -traditional NFTs is approximately 24,000 SOL. By comparison, cNFTs can be -structured to where the same setup and mint costs 10 SOL or less. That means -anyone using NFTs at scale could cut costs by more than 1000x by using cNFTs -over traditional NFTs. +Solana’s transaction costs are so low that most users don't think about the cost of minting NFTs at scale. However, minting 1 million +NFTs can cost around 24,000 SOL. In contrast, cNFTs can be +set up and minted for 10 SOL or less, meaning costs can be reduced +by over 1000x by using cNFTs. -However, cNFTs can be tricky to work with. Eventually, the tooling required to -work with them will be sufficiently abstracted from the underlying technology -that the developer experience between traditional NFTs and cNFTs will be -negligible. But for now, you’ll still need to understand the low level puzzle -pieces, so let’s dig in! +While cNFTs are much cheaper, they can be more complex to work with. Over time, tools will evolve to make using cNFTs as easy as traditional NFTs. + +But for now, you’ll still need to understand the technical details, so let’s dive in! ### A theoretical overview of cNFTs -Most of the costs associated with traditional NFTs come down to account storage -space. Compressed NFTs use a concept called State Compression to store data in -the blockchain’s cheaper **ledger state**, using more expensive account space +Most of the costs of traditional NFTs comes from the need for account storage +space. 
Compressed NFTs (cNFTs) use State Compression to store data in
+the blockchain’s cheaper **ledger state**, using the more expensive account storage space
 only to store a “fingerprint”, or **hash**, of the data. This hash allows you to
-cryptographically verify that data has not been tampered with.
+cryptographically verify that data has not been altered.
 
-To both store hashes and enable verification, we use a special binary tree
-structure known as a **concurrent Merkle tree**. This tree structure lets us
-hash data together in a deterministic way to compute a single, final hash that
-gets stored onchain. This final hash is significantly smaller in size than all
-the original data combined, hence the “compression.” The steps to this process
+To store and verify these hashes, we use a special binary tree
+structure known as a [**Concurrent Merkle Tree**](https://developers.metaplex.com/bubblegum/concurrent-merkle-trees). This tree structure combines data through hashing in a deterministic way, eventually creating a single smaller hash called a “root hash” that’s stored onchain. This process compresses the data, hence the “compression.” The steps to this process
 are:
 
 1. Take any piece of data
-2. Create a hash of this data
-3. Store this hash as a “leaf” at the bottom of the tree
-4. Each leaf pair is then hashed together, creating a “branch”
-5. Each branch is then hashed together
+2. Create a hash of the data
+3. Store the hash as a “leaf” at the bottom of the tree
+4. Hash pairs of leaves together to create “branches”
+5. Hash branches together
 6. Continually climb the tree and hash adjacent branches together
 7. Once at the top of the tree, a final ”root hash” is produced
-8. Store the root hash onchain as a verifiable proof of the data within each
-   leaf
+8. Store the root hash onchain as a verifiable proof that the data within each
+   leaf has not changed
9.
Anyone wanting to verify that the data they have matches the “source of
   truth” can go through the same process and compare the final hash without
   having to store all the data onchain

-One problem not addressed in the above is how to make data available if it can’t
-be fetched from an account. Since this hashing process occurs onchain, all the
+One challenge in the process above is how to make data available if it’s not directly stored in an account. Since this hashing happens onchain, all the
 data exists in the ledger state and could theoretically be retrieved from the
-original transaction by replaying the entire chain state from origin. However,
-it’s much more straightforward (though still complicated) to have an **indexer**
+original transaction by replaying the entire chain state from the origin. However,
+it’s easier (though still complicated) to have an **indexer**
 track and index this data as the transactions occur. This ensures there is an
 offchain “cache” of the data that anyone can access and subsequently verify
 against the onchain root hash.

-This process is _very complex_. We’ll cover some of the key concepts below but
-don’t worry if you don’t understand it right away. We’ll talk more theory in the
-state compression lesson and focus primarily on application to NFTs in this
-lesson. You’ll be able to work with cNFTs by the end of this lesson even if you
-don’t fully understand every piece of the state compression puzzle.
+This process is _complicated_. We’ll cover some of the key concepts below but
+don’t worry if you don’t understand it right away. We’ll cover more theory in the
+state compression lesson and focus on applying it to NFTs here. By the end of this lesson, you’ll be able to work with cNFTs even if you
+don’t fully understand the whole state compression process.

 #### Concurrent Merkle trees

-A **Merkle tree** is a binary tree structure represented by a single hash.
Every
+A **Merkle tree** is a binary tree structure that represents data as a single hash. Every
 leaf node in the structure is a hash of its inner data while every branch is a
 hash of its child leaf hashes. In turn, branches are also hashed together until
 eventually one final root hash remains.

-Any modification to leaf data changes the root hash. This causes an issue when
-multiple transactions in the same slot are attempting to modify leaf data. Since
+Any modification to leaf data changes the root hash. This causes a problem when
+multiple transactions in the same slot try to update the tree at the same time. Since
 these transactions must execute in series, all but the first will fail since the
 root hash and proof passed in will have been invalidated by the first
 transaction to be executed.

-A **concurrent Merkle tree** is a Merkle tree that stores a secure changelog of
+A **Concurrent Merkle Tree** is a Merkle tree that stores a secure changelog of
 the most recent changes along with their root hash and the proof to derive it.
-When multiple transactions in the same slot try to modify leaf data, the
+When multiple transactions in the same slot try to update the tree, the
 changelog can be used as a source of truth to allow for concurrent changes to be
 made to the tree.

 When working with a concurrent Merkle tree, there are three variables that
-determine the size of the tree, the cost to create the tree, and the number of
-concurrent changes that can be made to the tree:
+determine the size, the cost to create the tree, and how many changes can be made to the tree at once:

 1. Max depth
 2. Max buffer size
 3. Canopy depth

-The **max depth** is the maximum number of hops to get from any leaf to the root
-of the tree. Since Merkle trees are binary trees, every leaf is connected only
+The **max depth** is the maximum number of layers or "hops" to get from any leaf to the root
+of the tree.
Since it’s a type of binary tree, every leaf is connected only
 to one other leaf. Max depth can then logically be used to calculate the number
 of nodes for the tree with `2 ^ maxDepth`.

@@ -122,12 +112,11 @@ The **max buffer size** is effectively the maximum number of concurrent changes
 that you can make to a tree within a single slot with the root hash still being
 valid.

-The **canopy depth** is the number of proof nodes that are stored onchain for
-any given proof path. Verifying any leaf requires the complete proof path for
+The **canopy depth** is the number of proof nodes that are stored onchain for verification. To verify a leaf you need the complete proof path for
 the tree. The complete proof path is made up of one proof node for every “layer”
-of the tree, i.e. a max depth of 14 means there are 14 proof nodes. Every proof
-node adds 32 bytes to a transaction, so large trees would quickly exceed the
-maximum transaction size limit without caching proof nodes onchain.
+of the tree, i.e. a max depth of 14 means there are 14 proof nodes. The larger the tree, the more proof nodes there are, and each
+node adds 32 bytes to a transaction, which can quickly exceed the
+maximum transaction size limit, so caching proof nodes onchain helps manage this.

 Each of these three values, max depth, max buffer size, and canopy depth, comes
 with a tradeoff. Increasing the value of any of these values increases the size
@@ -144,73 +133,68 @@ concurrent writes do you need.

 #### SPL State Compression and Noop Programs

-The SPL State Compression Program exists to make the above process repeatable
-and composable throughout the Solana ecosystem. It provides instructions for
+The SPL State Compression Program simplifies and standardizes the process of using Merkle trees across the Solana ecosystem. It provides key functionalities for
 initializing Merkle trees, managing tree leafs (i.e. add, update, remove data),
 and verifying leaf data.
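The leaf-to-root hashing this program standardizes can be sketched in plain TypeScript. This is a toy illustration using Node's `crypto` module, not the actual SPL onchain implementation (which uses its own hashing scheme); the `merkleRoot` helper name is ours:

```typescript
import { createHash } from "node:crypto";

// Toy stand-in for the onchain hash: SHA-256 over concatenated inputs.
const hash = (...parts: Buffer[]): Buffer =>
  createHash("sha256").update(Buffer.concat(parts)).digest();

// Hash each leaf, then repeatedly hash adjacent pairs into "branches"
// until a single 32-byte root hash remains.
function merkleRoot(leaves: Buffer[]): Buffer {
  let level = leaves.map(leaf => hash(leaf));
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2) {
      // Duplicate the last node when a level has an odd number of nodes.
      next.push(hash(level[i], level[i + 1] ?? level[i]));
    }
    level = next;
  }
  return level[0];
}

const root = merkleRoot([Buffer.from("nft-0"), Buffer.from("nft-1")]);
console.log(root.toString("hex")); // the only value that needs to live onchain
```

Note that changing any single leaf changes the root, which is exactly why the root alone is enough to detect tampering.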
-The State Compression Program also leverages a separate “no op” program whose
-primary purpose is to make leaf data easier to index by logging it to the ledger
-state.
+The State Compression Program also leverages a separate “no op” (No Operation) program whose
+primary purpose is to log leaf data to the ledger state, making it easier to index.

 #### Use the Ledger State for storage

-The Solana ledger is a list of entries containing signed transactions. In
-theory, this can be traced back to the genesis block. This effectively means any
+The Solana ledger is a continuous record of signed transactions,
+theoretically traceable all the way back to the genesis block. This means any
 data that has ever been put into a transaction exists in the ledger.

 When you want to store compressed data, you pass it to the State Compression
-program where it gets hashed and emitted as an “event” to the Noop program. The
+program where it gets hashed and emitted as an “event” to the no op program. The
 hash is then stored in the corresponding concurrent Merkle tree. Since the data
-passed through a transaction and even exists on the Noop program logs, it will
-forever exist on the ledger state.
+passed through a transaction and even exists on the no op program logs, it will
+exist on the ledger state permanently.

 #### Index data for easy lookup

-Under normal conditions, you would typically access onchain data by fetching the
-appropriate account. When using state compression, however, it’s not so
-straightforward.
+Normally, you would access onchain data by fetching it from an account.
+However, when using state compression, it’s a bit more complicated.

-As mentioned above, the data now exists in the ledger state rather than in an
-account. The easiest place to find the full data is in the logs of the Noop
+Instead of being stored in an account, compressed data resides in the ledger state.
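Because only the root hash lives in an account, anyone holding an offchain copy of a leaf can re-derive and check it against that root. Here is a toy proof check in plain TypeScript; the names `verifyLeaf`, `proof`, and `isLeft` are illustrative and not part of any SPL API:

```typescript
import { createHash } from "node:crypto";

// Toy stand-in for the onchain hash: SHA-256 over concatenated inputs.
const hash = (...parts: Buffer[]): Buffer =>
  createHash("sha256").update(Buffer.concat(parts)).digest();

// Check that `leafData` belongs to the tree with root `root`, given the
// sibling hashes along the path to the root (the "proof path").
// `isLeft[i]` records whether proof[i] sits to the left of the running hash.
function verifyLeaf(
  leafData: Buffer,
  proof: Buffer[],
  isLeft: boolean[],
  root: Buffer,
): boolean {
  let node = hash(leafData);
  proof.forEach((sibling, i) => {
    node = isLeft[i] ? hash(sibling, node) : hash(node, sibling);
  });
  return node.equals(root);
}
```

This is the check an indexer's clients can run: the full leaf data comes from the offchain cache, while the root it is compared against comes from the chain.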
+The easiest way to access the full data is through the logs of the no op instruction, but while this data will in a sense exist in the ledger state forever, it will likely be inaccessible through validators after a certain period of time. To save space and be more performant, validators don’t retain every transaction back to the genesis block. The specific amount of time you’ll be able to access -the Noop instruction logs related to your data will vary based on the validator, +the no op instruction logs related to your data will vary based on the validator, but eventually you’ll lose access to it if you’re relying directly on instruction logs. -Technically, you _can_ replay transaction state back to the genesis block but -the average team isn’t going to do that, and it certainly won’t be performant. -Instead, you should use an indexer that will observe the events sent to the Noop -program and store the relevant data off chain. That way you don’t need to worry +Technically, it is possible to replay the entire transaction history back to the genesis block but +this is impractical and unperformant for most teams. +Instead, a better approach is using an indexer that will observe the events sent to the no op +program and store the relevant data offchain. That way you don’t need to worry about old data becoming inaccessible. ### Create a cNFT Collection -With the theoretical background out of the way, let’s turn our attention to the +Now that we've covered the theory, let’s focus on the main point of this lesson: how to create a cNFT collection. Fortunately, you can use tools created by Solana Foundation, the Solana developer community, and Metaplex to simplify the process. Specifically, we’ll be using the `@solana/spl-account-compression` SDK, the Metaplex Bubblegum -program, and the Bubblegum program’s corresponding TS SDK +program, and the Bubblegum program’s corresponding typescript SDK `@metaplex-foundation/mpl-bugglegum`. 
#### Prepare metadata -Prior to starting, you’ll prepare your NFT metadata similarly to how you would -if you were using a Candy Machine. At its core, an NFT is simply a token with -metadata that follows the NFT standard. In other words, it should be shaped -something like this: +Before starting, you’ll need to prepare your NFT metadata similar to how you would with a Candy Machine. An NFT is simply a token with +metadata that follows the NFT standard. In other words, heres an example of how it should look like: ```json { @@ -236,18 +220,17 @@ something like this: } ``` -Depending on your use case, you may be able to generate this dynamically or you -might want to have a JSON file prepared for each cNFT beforehand. You’ll also +Depending on your project, you may be able to generate this metadata dynamically or you have a separate JSON file prepared for each cNFT . You’ll also need any other assets referenced by the JSON, such as the `image` url shown in the example above. #### Create Collection NFT If you want your cNFTs to be part of a collection, you’ll need to create a -Collection NFT **before** you start minting cNFTs. This is a traditional NFT -that acts as the reference binding your cNFTs together into a single collection. -You can create this NFT using the `@metaplex-foundation/js` library. Just make -sure you set `isCollection` to `true`. +Collection NFT **before** you start minting cNFTs. A Collection NFT is a traditional NFT +that serves as the reference binding your cNFTs together into a single collection. +To create a Collection NFT you can use the `@metaplex-foundation/js` library. Just make +sure you set `isCollection` field to `true`. ```typescript const collectionNft = await metaplex.nfts().create({ @@ -265,10 +248,8 @@ const collectionNft = await metaplex.nfts().create({ #### Create Merkle tree Account -Now we start to deviate from the process you would use when creating traditional -NFTs. 
The onchain storage mechanism you use for state compression is an account
-representing a concurrent Merkle tree. This Merkle tree account belongs to the
-SPL State Compression program. Before you can do anything related to cNFTs, you
+When creating compressed NFTs (cNFTs), you need to set up an account for the
+Concurrent Merkle Tree. This Merkle tree account belongs to the
+SPL State Compression program. Before you can do any cNFT-related actions, you
 need to create an empty Merkle tree account with the appropriate size.

 The variables impacting the size of the account are:

@@ -277,7 +258,7 @@ The variables impacting the size of the account are:
 2. Max buffer size
 3. Canopy depth

-The first two variables must be chosen from an existing set of valid pairs. The
+The max depth and max buffer size must be selected from an existing set of valid pairs. The
 table below shows the valid pairs along with the number of cNFTs that can be
 created with those values.

@@ -318,7 +299,7 @@ NFTs you need the tree to hold, then choose one of the options for max buffer
size based on the traffic you expect you’ll need to support.

Next, choose the canopy depth. Increasing the canopy depth increases the
-composability of your cNFTs. Any time your or another developer’s code attempts
+composability of your cNFTs. Any time you or another developer’s code attempts
 to verify a cNFT down the road, the code will have to pass in as many proof
 nodes as there are “layers” in your tree. So for a max depth of 20, you’ll need
 to pass in 20 proof nodes. Not only is this tedious, but since each proof node
@@ -330,20 +311,21 @@ bidding system for your cNFTs. The canopy effectively caches proof nodes onchain
 so you don’t have to pass all of them into the transaction, allowing for more
 complex transactions.
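A quick back-of-the-envelope sketch of these tradeoffs, in plain TypeScript. The helper names are ours; the 1232-byte figure is Solana's transaction size limit, against which proof bytes compete with the rest of your instruction data:

```typescript
// Number of leaves (cNFTs) a tree of a given max depth can hold.
const capacity = (maxDepth: number): number => 2 ** maxDepth;

// Bytes of proof data a client must send per verification:
// one 32-byte node per layer not already cached onchain by the canopy.
const proofBytes = (maxDepth: number, canopyDepth: number): number =>
  (maxDepth - canopyDepth) * 32;

// Solana transactions are capped at 1232 bytes, so proof size matters.
const TX_SIZE_LIMIT = 1232;

console.log(capacity(14)); // 16384 leaves
console.log(proofBytes(20, 0)); // 640 bytes of proof with no canopy
console.log(proofBytes(20, 10)); // 320 bytes once 10 layers are cached
```

Even the 640-byte worst case above fits under the limit on its own, but combined with accounts, signatures, and other instruction data, a deep tree with no canopy leaves very little room for anything else in the transaction.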
-Increasing any of these three values increases the size of the account, thereby
+Increasing any of these three values increases the size of the account, thereby also
 increasing the cost associated with creating it. Weigh the benefits accordingly
 when choosing the values.

 Once you know these values, you can use the `createAllocTreeIx` helper function
 from the `@solana/spl-account-compression` TS SDK to create the instruction for
-creating the empty account.
+creating the Merkle tree account.

 ```typescript
 import { createAllocTreeIx } from "@solana/spl-account-compression"

+// Generate a keypair for the Merkle tree account
 const treeKeypair = Keypair.generate()

-const allocTreeIx = await createAllocTreeIx(
+const allocTreeInstruction = await createAllocTreeIx(
   connection,
   treeKeypair.publicKey,
   payer.publicKey,
@@ -352,18 +334,17 @@ const allocTreeIx = await createAllocTreeIx(
 )
 ```

-Note that this is simply a helper function for calculating the size required by
-the account and creating the instruction to send to the System Program for
-allocating the account. This function doesn’t interact with any
-compression-specific programs yet.
+Note that `createAllocTreeIx` is simply a helper function for calculating the size of the Merkle tree
+account and creating the instruction to send to the System Program to
+allocate space for it. This function doesn’t interact with any
+compression-specific programs (it doesn’t actually add data to the tree).

 #### Use Bubblegum to Initialize Your Tree

-With the empty tree account created, you then use the Bubblegum program to
-initialize the tree. In addition to the Merkle tree account, Bubblegum creates a
-tree config account to add cNFT-specific tracking and functionality.
+With the empty tree account created, the next step is to initialize the tree
+using the Bubblegum program. This involves creating a tree config account that
+allows the Bubblegum program to add cNFT-specific tracking and functionality.
-Version 0.7 of the `@metaplex-foundation/mpl-bubblegum` TS SDK provides the +Version 0.7 of the `@metaplex-foundation/mpl-bubblegum` Typescript SDK provides the helper function `createCreateTreeInstruction` for calling the `create_tree` instruction on the Bubblegum program. As part of the call, you’ll need to derive the `treeAuthority` PDA expected by the program. This PDA uses the tree’s @@ -381,27 +362,27 @@ import { } from "@metaplex-foundation/mpl-bubblegum" ... - +//Derive the treeAuthority PDA from the tree's public key const [treeAuthority, _bump] = PublicKey.findProgramAddressSync( [treeKeypair.publicKey.toBuffer()], BUBBLEGUM_PROGRAM_ID ) - -const createTreeIx = createCreateTreeInstruction( +//Create the instruction to initialize the tree using Bubblegum +const createTreeInstruction = createCreateTreeInstruction( { - treeAuthority, - merkleTree: treeKeypair.publicKey, - payer: payer.publicKey, - treeCreator: payer.publicKey, - logWrapper: SPL_NOOP_PROGRAM_ID, - compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, + treeAuthority, //Derived PDA for treeAuthority + merkleTree: treeKeypair.publicKey,// Public key of the Merkle Tree + payer: payer.publicKey, //Payer for transaction fees + treeCreator: payer.publicKey,// the address that will be listed as the tree creator + logWrapper: SPL_NOOP_PROGRAM_ID,// no op program for exposing data to indexers + compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, // Compression program ID }, { - maxBufferSize: 256, - maxDepth: 20, - public: false, + maxBufferSize: 256, // Max buffer size + maxDepth: 20, // Max depth of the tree + public: false, // Set to false to restrict minting to the tree creator/delegate }, - BUBBLEGUM_PROGRAM_ID + BUBBLEGUM_PROGRAM_ID ) ``` @@ -441,12 +422,11 @@ Feel free to take a look at the program code #### Mint cNFTs -With the Merkle tree account and its corresponding Bubblegum tree config account -initialized, it’s possible to mint cNFTs to the tree. 
The Bubblegum instruction
-to use will be either `mint_v1` or `mint_to_collection_v1`, depending on whether
-or not you want to the minted cNFT to be part of a collection.
+Now that we have the Merkle tree account and its corresponding Bubblegum tree
+config account initialized, it's time to mint cNFTs to the tree. We can use
+either the `mint_v1` or `mint_to_collection_v1` Bubblegum instruction, depending
+on whether you want the minted cNFT to be part of a collection.

-Version 0.7 of the `@metaplex-foundation/mpl-bubblegum` TS SDK provides helper
+Version 0.7 of the `@metaplex-foundation/mpl-bubblegum` TypeScript SDK provides helper
 functions `createMintV1Instruction` and `createMintToCollectionV1Instruction` to
 make it easier for you to create the instructions.

@@ -455,7 +435,7 @@ accounts required to mint the cNFT. Below is an example of minting to a
collection:

 ```typescript
-const mintWithCollectionIx = createMintToCollectionV1Instruction(
+const mintWithCollectionInstruction = createMintToCollectionV1Instruction(
   {
     payer: payer.publicKey,
     merkleTree: treeAddress,
@@ -518,7 +498,7 @@ exclusive to minting without a collection. You can take a look at the example
below.

 ```typescript
-const mintWithoutCollectionIx = createMintV1Instruction(
+const mintWithoutCollectionInstruction = createMintV1Instruction(
   {
     payer: payer.publicKey,
     merkleTree: treeAddress,
@@ -553,8 +533,8 @@ support the Read API. In this lesson we’ll be using
[Helius](https://docs.helius.dev/compression-and-das-api/digital-asset-standard-das-api)
as they have free support for Devnet.

-To use the Read API to fetch a specific cNFT, you need to have the cNFT’s asset
-ID. However, after minting cNFTs, you’ll have at most two pieces of information:
+To fetch a specific cNFT using the Read API, you need to derive the cNFT’s asset
+ID from the leaf index (which you track). However, after minting cNFTs you’ll have at most two pieces of information:

 1. The transaction signature
 2.
The leaf index (possibly)
@@ -568,7 +548,7 @@ index.

 This is a reasonable assumption for most mints given that the minting will be
 controlled by your code and can be set up sequentially so that your code can
-track which index is going to be used for each mint. I.e. the first mint will
+track which index is going to be used for each mint. That is, the first mint will
 use index 0, the second index 1, etc.

 Once you have the leaf index, you can derive the cNFT’s corresponding asset ID.
@@ -584,9 +564,8 @@ happen and stores the cNFT metadata that was hashed and stored in the Merkle
tree. This enables them to surface that data when requested. This asset id is
what the indexer uses to identify the particular asset.

-For simplicity, you can just use the `getLeafAssetId` helper function from the
-Bubblegum SDK. With the asset ID, fetching the cNFT is fairly straightforward.
-Simply use the `getAsset` method provided by the supporting RPC provider:
+Here's how you can fetch the cNFT using the `getLeafAssetId` helper function from the
+Bubblegum SDK, and the `getAsset` method provided by your RPC provider:

 ```typescript
 const assetId = await getLeafAssetId(treeAddress, new BN(leafIndex));
@@ -607,8 +586,8 @@ const { result } = await response.json();
 console.log(JSON.stringify(result, null, 2));
 ```

-This will return a JSON object that is comprehensive of what a traditional NFT’s
-on- and offchain metadata would look like combined. For example, you can find
+After fetching the cNFT with the `getAsset` RPC method, the returned JSON object combines what a traditional NFT’s
+on- and offchain metadata would look like. For example, you can find
 the cNFT attributes at `content.metadata.attributes` or the image at
 `content.files.uri`.

@@ -632,10 +611,10 @@ to learn how to use them correctly.

 #### Transfer cNFTs

-Just as with a standard SPL token transfer, security is paramount.
An SPL token +Just as with a standard SPL token transfer, security is important. An SPL token transfer, however, makes verifying transfer authority very easy. It’s built into the SPL Token program and standard signing. A compressed token’s ownership is -more difficult to verify. The actual verification will happen program-side, but +harder to verify. The actual verification will happen program-side, but your client-side code needs to provide additional information to make it possible. @@ -696,8 +675,7 @@ const assetProofResponse = await fetch(process.env.RPC_URL, { const assetProof = (await assetProofResponse.json()).result; ``` -The third step is to fetch the Merkle tree account. The simplest way to do this -is using the `ConcurrentMerkleTreeAccount` type from +The third step is to fetch the Merkle tree account. We do this by using the `ConcurrentMerkleTreeAccount` type from `@solana/spl-account-compression`: ```typescript @@ -709,7 +687,7 @@ const treeAccount = await ConcurrentMerkleTreeAccount.fromAccountAddress( ); ``` -Step four is the most conceptually challenging step. Using the three pieces of +Step four is the most complex step. Using the three pieces of information gathered, you’ll need to assemble the proof path for the cNFT’s corresponding leaf. The proof path is represented as accounts passed to the program instruction. The program uses each of the account addresses as proof @@ -771,7 +749,7 @@ const leafDelegate = assetData.ownership.delegate ? new PublicKey(assetData.ownership.delegate) : leafOwner; -const transferIx = createTransferInstruction( +const transferInstruction = createTransferInstruction( { merkleTree: treePublicKey, treeAuthority, @@ -806,8 +784,7 @@ functionality, take a look at the [Bubblegum client source code](https://github.com/metaplex-foundation/mpl-bubblegum/tree/main/clients/js-solita) and leverage the helper functions it provides. -Keep in mind that compression is fairly new. 
Available tooling will evolve -rapidly but the principles you’ve learned in this lesson will likely remain the +Keep in mind that compression is still new. Available tooling will improve quickly but the principles you’ve learned in this lesson will likely remain the same. These principles can also be broadened to arbitrary state compression, so be sure to master them here so you’re ready for more fun stuff in future lessons! @@ -818,12 +795,12 @@ Let’s jump in and practice creating and working with cNFTs. Together, we’ll build as simple a script as possible that will let us mint a cNFT collection from a Merkle tree. -#### 1. Get the starter code +### 1. Get the starter code First things first, clone the starter code from the `starter` branch of our -[cNFT lab repository](https://github.com/Unboxed-Software/solana-cnft-demo). +[cNFT lab repository](https://github.com/solana-developers/solana-cnft-demo). -`git clone https://github.com/Unboxed-Software/solana-cnft-demo.git` +`git clone https://github.com/solana-developers/solana-cnft-demo.git` `cd solana-cnft-demo` @@ -837,8 +814,8 @@ The `uri.ts` file provides 10k URIs that you can use for the offchain portion of your NFT metadata. You can, of course, create your own metadata. But this lesson isn’t explicitly about preparing metadata so we’ve provided some for you. -The `utils.ts` file has a few helper functions to keep you from writing more -unnecessary boilerplate than you need to. They are as follows: +The `utils.ts` file has a few helper functions to keep you from writing +unnecessary boilerplate code. They are as follows: - `getOrCreateKeypair` will create a new keypair for you and save it to a `.env` file, or if there’s already a private key in the `.env` file it will @@ -858,9 +835,9 @@ connection, calls `getOrCreateKeypair` to initialize a “wallet,” and calls We will be writing all of our code in the `index.ts`. -#### 2. Create the Merkle tree account +### 2. 
Create the Merkle tree account -We’ll start by creating the Merkle tree account. Let’s encapsulate this in a +We’ll start by creating the Merkle tree account. Let’s wrap this in a function that will eventually create _and_ initialize the account. We’ll put it below our `main` function in `index.ts`. Let’s call it `createAndInitializeTree`. For this function to work, it will need the following @@ -886,7 +863,7 @@ async function createAndInitializeTree( ) { const treeKeypair = Keypair.generate(); - const allocTreeIx = await createAllocTreeIx( + const allocTreeInstruction = await createAllocTreeIx( connection, treeKeypair.publicKey, payer.publicKey, @@ -896,9 +873,9 @@ async function createAndInitializeTree( } ``` -#### 3. Use Bubblegum to initialize the Merkle tree and create the tree config account +### 3. Use Bubblegum to initialize the Merkle tree and create the tree config account -With the instruction for creating the tree ready to go, we can create an +Now that the instruction for creating the tree is ready, we can create an instruction for invoking `create_tree` on the Bubblegum program. This will initialize the Merkle tree account _and_ create a new tree config account on the Bubblegum program. 
@@ -934,7 +911,7 @@ async function createAndInitializeTree(
 ) {
   const treeKeypair = Keypair.generate();
 
-  const allocTreeIx = await createAllocTreeIx(
+  const allocTreeInstruction = await createAllocTreeIx(
     connection,
     treeKeypair.publicKey,
     payer.publicKey,
@@ -947,7 +924,7 @@
     BUBBLEGUM_PROGRAM_ID,
   );
 
-  const createTreeIx = createCreateTreeInstruction(
+  const createTreeInstruction = createCreateTreeInstruction(
     {
       treeAuthority,
       merkleTree: treeKeypair.publicKey,
@@ -963,13 +940,13 @@
     },
   );
 
-  const tx = new Transaction().add(allocTreeIx, createTreeIx);
-  tx.feePayer = payer.publicKey;
+  const transaction = new Transaction().add(
+    allocTreeInstruction,
+    createTreeInstruction,
+  );
+  transaction.feePayer = payer.publicKey;
 
   try {
-    const txSignature = await sendAndConfirmTransaction(
+    const transactionSignature = await sendAndConfirmTransaction(
       connection,
-      tx,
+      transaction,
       [treeKeypair, payer],
       {
         commitment: "confirmed",
@@ -977,7 +954,9 @@
       },
     );
 
-    console.log(`https://explorer.solana.com/tx/${txSignature}?cluster=devnet`);
+    const explorerLink = getExplorerLink(
+      "transaction",
+      transactionSignature,
+      "devnet",
+    );
+
+    console.log(`Transaction submitted: ${explorerLink}`);
 
     console.log("Tree Address:", treeKeypair.publicKey.toBase58());
 
@@ -989,12 +968,12 @@
 }
 ```

-If you want to test what you have so far, feel free to call
+To test what you have so far, call
`createAndInitializeTree` from `main` and provide small values for the max depth
and max buffer size.
```typescript
 async function main() {
   const connection = new Connection(clusterApiUrl("devnet"), "confirmed");
   const wallet = await getOrCreateKeypair("Wallet_1");
   await airdropSolIfNeeded(wallet.publicKey);
@@ -1015,7 +994,7 @@
 }
 ```

-Keep in mind that Devnet SOL is throttled so if you test too many times you
+Remember that Devnet SOL is limited, so if you test too many times you
 might run out of Devnet SOL before we get to minting. To test, in your terminal
run the following:
@@ -1023,8 +1002,8 @@

 #### 4. Mint cNFTs to your tree

-Believe it or not, that’s all you needed to do to set up your tree to compressed
-NFTs! Now let’s turn our attention to minting.
+Believe it or not, that’s all it takes to set up your tree for minting
+compressed NFTs! Now let’s focus on the minting process.

First, let’s declare a function called `mintCompressedNftToCollection`. It will
need the following parameters:
@@ -1085,7 +1064,7 @@ them real quick:

When you put it all together, this is what it’ll look like:

```typescript
 async function mintCompressedNftToCollection(
   connection: Connection,
   payer: Keypair,
   treeAddress: PublicKey,
@@ -1098,7 +1077,7 @@
     BUBBLEGUM_PROGRAM_ID,
   );

-  // Derive the bubblegum signer, used by the Bubblegum program to handle "collection verification"
+  // Derive the Bubblegum signer, used by the Bubblegum program to handle "collection verification"
   // Only used for `createMintToCollectionV1` instruction
   const [bubblegumSigner] = PublicKey.findProgramAddressSync(
     [Buffer.from("collection_cpi", "utf8")],
@@ -1110,7 +1089,7 @@
     const compressedNFTMetadata = createNftMetadata(payer.publicKey, i);

     // Create the instruction to "mint" the compressed NFT to the tree
-    const mintIx = createMintToCollectionV1Instruction(
+    const mintInstruction = createMintToCollectionV1Instruction(
{
        payer: payer.publicKey, // The account that will pay for the transaction
        merkleTree: treeAddress, // The address of the tree account
@@ -1137,25 +1116,25 @@

     try {
       // Create new transaction and add the instruction
-      const tx = new Transaction().add(mintIx);
+      const transaction = new Transaction().add(mintInstruction);

       // Set the fee payer for the transaction
-      tx.feePayer = payer.publicKey;
+      transaction.feePayer = payer.publicKey;

       // Send the transaction
-      const txSignature = await sendAndConfirmTransaction(
+      const transactionSignature = await sendAndConfirmTransaction(
         connection,
-        tx,
+        transaction,
        [payer],
        { commitment: "confirmed", skipPreflight: true },
      );

-      console.log(
-        `https://explorer.solana.com/tx/${txSignature}?cluster=devnet`,
-      );
-    } catch (err) {
-      console.error("\nFailed to mint compressed NFT:", err);
-      throw err;
+      const explorerLink = getExplorerLink(
+        "transaction",
+        transactionSignature,
+        "devnet",
+      );
+
+      console.log(`Transaction submitted: ${explorerLink}`);
+    } catch (error) {
+      console.error("\nFailed to mint compressed NFT:", error);
+      throw error;
    }
  }
}
@@ -1165,7 +1144,7 @@ This is a great point to test with a small tree. Simply update `main` to call
`getOrCreateCollectionNFT` then `mintCompressedNftToCollection`:

```typescript
 async function main() {
   const connection = new Connection(clusterApiUrl("devnet"), "confirmed");
   const wallet = await getOrCreateKeypair("Wallet_1");
   await airdropSolIfNeeded(wallet.publicKey);
@@ -1205,13 +1184,13 @@ their data. This is tricky because the onchain data is just the Merkle tree
account, the data from which can be used to verify existing information as
accurate but is useless in conveying what the information is.

-Let’s start by declaring a function `logNftDetails` that takes as parameters
+Let’s start by declaring a function `logNftDetails` that takes two parameters,
`treeAddress` and `nftsMinted`.
-At this point we don’t actually have a direct identifier of any kind that points
-to our cNFT. To get that, we’ll need to know the leaf index that was used when
+Since there’s no direct identifier of any kind that points
+to our cNFT, we need to know the leaf index that was used when
 we minted our cNFT. We can then use that to derive the asset ID used by the Read
-API and subsequently use the Read API to fetch our cNFT data.
+API and then use the Read API to fetch our cNFT data.

In our case, we created a non-public tree and minted 8 cNFTs, so we know that
the leaf indexes used were 0-7. With this, we can use the `getLeafAssetId`
@@ -1234,7 +1213,7 @@ Then simply issue a POST request to your provided RPC URL and put the `getAsset`
information in the body:

```typescript
 async function logNftDetails(treeAddress: PublicKey, nftsMinted: number) {
   for (let i = 0; i < nftsMinted; i++) {
     const assetId = await getLeafAssetId(treeAddress, new BN(i));
     console.log("Asset ID:", assetId.toBase58());
@@ -1256,12 +1235,12 @@
 }
 ```

-Helius essentially observes transaction logs as they happen and stores the NFT
+Helius monitors transaction logs in real time and stores the NFT
 metadata that was hashed and stored in the Merkle tree. This enables them to
-surface that data when requested.
+display that data when requested.

 If we add a call to this function at the end of `main` and re-run your script,
-the data we get back in the console is very comprehensive. It includes all of
+the data we get back in the console is very detailed. It includes all of
 the data you’d expect in both the onchain and offchain portion of a traditional
 NFT. You can find the cNFT’s attributes, files, ownership and creator
 information, and more.
@@ -1365,13 +1344,13 @@ to see what’s available.

 #### 6.
Transfer a cNFT

The last thing we’re going to add to our script is a cNFT transfer. Just as with
-a standard SPL token transfer, security is paramount. Unlike with a standard SPL
+a standard SPL token transfer, security is important. Unlike a regular SPL
token transfer, however, to build a secure transfer with state compression of
any kind, the program performing the transfer needs the entire asset data.

The program, Bubblegum in this case, needs to be provided with the entire data
that was hashed and stored on the corresponding leaf _and_ needs to be given the
-“proof path” for the leaf in question. That makes cNFT transfers a bit trickier
+“proof path” for the leaf in question. That makes cNFT transfers a bit more complicated
than SPL token transfers.

Remember, the general steps are:
@@ -1393,7 +1372,7 @@ Inside that function, let’s fetch the asset data again then also fetch the ass
proof. For good measure, let’s wrap everything in a `try catch`.

```typescript
 async function transferNft(
   connection: Connection,
   assetId: PublicKey,
   sender: Keypair,
@@ -1427,9 +1406,9 @@
       }),
     });
     const assetProof = (await assetProofResponse.json()).result;
-  } catch (err: any) {
-    console.error("\nFailed to transfer nft:", err);
-    throw err;
+  } catch (error: any) {
+    console.error("\nFailed to transfer nft:", error);
+    throw error;
  }
}
```

@@ -1440,7 +1419,7 @@ Helius to a list of `AccountMeta` objects, then removing any proof nodes at the
end that are already cached onchain in the canopy.
```typescript
 async function transferNft(
   connection: Connection,
   assetId: PublicKey,
   sender: Keypair,
@@ -1465,9 +1444,9 @@
       isWritable: false,
     }))
     .slice(0, assetProof.proof.length - canopyDepth)
-  } catch (err: any) {
-    console.error("\nFailed to transfer nft:", err)
-    throw err
+  } catch (error: any) {
+    console.error("\nFailed to transfer nft:", error)
+    throw error
  }
}
```

@@ -1477,7 +1456,7 @@ transaction, then sign and send the transaction. This is what the entire
`transferNft` function looks like when finished:

```typescript
 async function transferNft(
  connection: Connection,
  assetId: PublicKey,
  sender: Keypair,
@@ -1535,7 +1514,7 @@
    ? new PublicKey(assetData.ownership.delegate)
    : leafOwner;

-    const transferIx = createTransferInstruction(
+    const transferInstruction = createTransferInstruction(
    {
      merkleTree: treePublicKey,
      treeAuthority,
@@ -1559,18 +1538,20 @@
    },
  );

-    const tx = new Transaction().add(transferIx);
-    tx.feePayer = sender.publicKey;
+    const transaction = new Transaction().add(transferInstruction);
+    transaction.feePayer = sender.publicKey;
    const txSignature = await sendAndConfirmTransaction(
      connection,
-      tx,
+      transaction,
      [sender],
      {
        commitment: "confirmed",
        skipPreflight: true,
      },
    );
-    console.log(`https://explorer.solana.com/tx/${txSignature}?cluster=devnet`);
+    const explorerLink = getExplorerLink("transaction", txSignature, "devnet");
+
+    console.log(`Transaction submitted: ${explorerLink}`);
 } catch (err: any) {
    console.error("\nFailed to transfer nft:", err);
    throw err;
@@ -1585,7 +1566,7 @@
entire collection using our function `logNftDetails`. You’ll note that the NFT
at index zero will now belong to our new wallet in the `ownership` field.
```typescript
 async function main() {
   const connection = new Connection(clusterApiUrl("devnet"), "confirmed");
   const wallet = await getOrCreateKeypair("Wallet_1");
   await airdropSolIfNeeded(wallet.publicKey);
@@ -1642,12 +1623,12 @@
to try and batch these instructions for fewer total transactions.

If you need more time with this lab, feel free to go through it again and/or
take a look at the solution code on the `solution` branch of the
-[lab repo](https://github.com/Unboxed-Software/solana-cnft-demo/tree/solution).
+[lab repo](https://github.com/solana-developers/solana-cnft-demo/tree/solution).

### Challenge

-It’s your turn to take these concepts for a spin on your own! We’re not going to
-be overly prescriptive at this point, but here are some ideas:
+It’s your turn to apply these concepts on your own! We’re not going to
+give you detailed instructions at this point, but here are some ideas:

1. Create your own production cNFT collection
2. Build a UI for this lesson’s lab that will let you mint a cNFT and display it

From 3302fe7131beffb6d26e12e225b1fc6d2d941233 Mon Sep 17 00:00:00 2001
From: adpthegreat
Date: Thu, 26 Sep 2024 19:26:07 +0100
Subject: [PATCH 2/6] - Followed guidelines in contributing.md
- Refactored it to use newest dependencies
- Replaced @metaplex- foundation/js with @metaplex-foundation-umi
- Replaced old methods with their corresponding umi methods
- Replaced all old code snippets
- Gave clearer variable names
- Removed redundant code
- Ran prettier

---
 .../state-compression/compressed-nfts.md      | 1142 ++++++-----------
 1 file changed, 410 insertions(+), 732 deletions(-)

diff --git a/content/courses/state-compression/compressed-nfts.md b/content/courses/state-compression/compressed-nfts.md
index 0b6241d9c..b5b9d5158 100644
--- a/content/courses/state-compression/compressed-nfts.md
+++ b/content/courses/state-compression/compressed-nfts.md
@@ -20,36 +20,43 @@ description:
 - Supporting RPC providers **index** cNFT data offchain
when the cNFT is minted so that you can use the **Read API** to access the data. - The **Metaplex Bubblegum program** is an abstraction on top of the **State - Compression** program that enables you to create, mint, and manage - cNFT collections. + Compression** program that enables you to create, mint, and manage cNFT + collections. ## Lesson Compressed NFTs (cNFTs) are exactly what their name suggests: NFTs whose structure takes up less account storage than traditional NFTs. Compressed NFTs -use a technique called [**State Compression**](https://solana.com/docs/advanced/state-compression) to store data in a way that -drastically reduces costs. +use a technique called +[**State Compression**](https://solana.com/docs/advanced/state-compression) to +store data in a way that drastically reduces costs. -Solana’s transaction costs are so low that most users don't think about the cost of minting NFTs at scale. However, minting 1 million -NFTs can cost around 24,000 SOL. In contrast, cNFTs can be -set up and minted for 10 SOL or less, meaning costs can be reduced -by over 1000x by using cNFTs. +Solana’s transaction costs are so low that most users don't think about the cost +of minting NFTs at scale. However, minting 1 million NFTs can cost around 24,000 +SOL. In contrast, cNFTs can be set up and minted for 10 SOL or less, meaning +costs can be reduced by over 1000x by using cNFTs. -While cNFTs are much cheaper, they can be more complex to work with. Over time, tools will evolve to make using cNFTs as easy as traditional NFTs. +While cNFTs are much cheaper, they can be more complex to work with. Over time, +tools will evolve to make using cNFTs as easy as traditional NFTs. -But for now, you’ll still need to understand the technical details, so let’s dive in! +But for now, you’ll still need to understand the technical details, so let’s +dive in! ### A theoretical overview of cNFTs Most of the costs of traditional NFTs comes from the need for account storage -space. 
Compressed NFTs (cNFTs) use State Compression to store data in
-the blockchain’s cheaper **ledger state**, using the more expensive account storage space
-only to store a “fingerprint”, or **hash**, of the data. This hash allows you to
-cryptographically verify that data has not been altered.
-
-To store and verify these hashes , we use a special binary tree
-structure known as a [**Concurrent Merkle Tree**](https://developers.metaplex.com/bubblegum/concurrent-merkle-trees). This tree structure combines data through hashing in a determininistic way, eventually creating a single smaller hash called a "root hash" that's stored onchain, this process ccompresses the data, hence the “compression.” The steps to this process
-are:
+space. Compressed NFTs (cNFTs) use State Compression to store data in the
+blockchain’s cheaper **ledger state**, using the more expensive account storage
+space only to store a “fingerprint”, or **hash**, of the data. This hash allows
+you to cryptographically verify that data has not been altered.
+
+To store and verify these hashes, we use a special binary tree structure known
+as a
+[**Concurrent Merkle Tree**](https://developers.metaplex.com/bubblegum/concurrent-merkle-trees).
+This tree structure combines data through hashing in a deterministic way,
+eventually creating a single smaller hash called a “root hash” that’s stored
+onchain. This process compresses the data, hence the “compression.” The steps
+to this process are:

 1. Take any piece of data
 2. Create a hash of the data
@@ -64,30 +71,32 @@ are:
    truth” can go through the same process and compare the final hash without
    having to store all the data onchain

-One challenge in the process above is how to make data available if its not directly stored in an account. Since this hashing happens onchain, all the
-data exists in the ledger state and could theoretically be retrieved from the
-original transaction by replaying the entire chain state from the origin.
However,
-it’s easier (though still complicated) to have an **indexer**
-track and index this data as the transactions occur. This ensures there is an
-offchain “cache” of the data that anyone can access and subsequently verify
-against the onchain root hash.
+One challenge in the process above is how to make data available if it’s not
+directly stored in an account. Since this hashing happens onchain, all the data
+exists in the ledger state and could theoretically be retrieved from the
+original transaction by replaying the entire chain state from the origin.
+However, it’s easier (though still complicated) to have an **indexer** track and
+index this data as the transactions occur. This ensures there is an offchain
+“cache” of the data that anyone can access and subsequently verify against the
+onchain root hash.

 This process is _complicated_. We’ll cover some of the key concepts below but
-don’t worry if you don’t understand it right away. We’ll cover more theory in the
-state compression lesson and focus on applying it to NFTs. By the end of this lesson, you’ll be able to work with cNFTs even if you
-don’t fully understand the whole state compression process.
+don’t worry if you don’t understand it right away. We’ll cover more theory in
+the state compression lesson and focus on applying it to NFTs. By the end of
+this lesson, you’ll be able to work with cNFTs even if you don’t fully
+understand the whole state compression process.

 #### Concurrent Merkle trees

-A **Merkle tree** is a binary tree structure that represents data as a single hash. Every
-leaf node in the structure is a hash of its inner data while every branch is a
-hash of its child leaf hashes. In turn, branches are also hashed together until
-eventually one final root hash remains.
+A **Merkle tree** is a binary tree structure that represents data as a single
+hash. Every leaf node in the structure is a hash of its inner data while every
+branch is a hash of its child leaf hashes.
In turn, branches are also hashed
+together until eventually one final root hash remains.

 Any modification to leaf data changes the root hash. This causes a problem when
-multiple transactions in the same slot try to update the tree at the same time. Since
-these transactions must execute in series, all but the first will fail since the
-root hash and proof passed in will have been invalidated by the first
+multiple transactions in the same slot try to update the tree at the same time.
+Since these transactions must execute in series, all but the first will fail
+since the root hash and proof passed in will have been invalidated by the first
 transaction to be executed.

 A **Concurrent Merkle Tree** is a Merkle tree that stores a secure changelog of
@@ -97,26 +106,29 @@ changelog can be used as a source of truth to allow for concurrent changes to be
 made to the tree.

 When working with a concurrent Merkle tree, there are three variables that
-determine the size, the cost to create the tree, and how many number changes that can be made to the tree at once:
+determine the size, the cost to create the tree, and how many changes can be
+made to the tree at once:

 1. Max depth
 2. Max buffer size
 3. Canopy depth

-The **max depth** is the maximum number of layers or "hops" to get from any leaf to the root
-of the tree. Since its a type of binary tree, every leaf is connected only
-to one other leaf. Max depth can then logically be used to calculate the number
-of nodes for the tree with `2 ^ maxDepth`.
+The **max depth** is the maximum number of layers or "hops" to get from any leaf
+to the root of the tree. Since it’s a type of binary tree, every leaf is
+connected only to one other leaf. Max depth can then logically be used to
+calculate the number of nodes for the tree with `2 ^ maxDepth`.

 The **max buffer size** is effectively the maximum number of concurrent changes
 that you can make to a tree within a single slot with the root hash still being
 valid.
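To make the max depth numbers concrete, here is a quick sketch of the capacity calculation described above (a hypothetical helper for illustration, not part of the lesson code or any SDK):

```typescript
// A tree with `maxDepth` layers holds 2 ^ maxDepth leaves,
// i.e. one leaf slot per cNFT.
function maxCnftCapacity(maxDepth: number): number {
  return 2 ** maxDepth;
}

// A depth-14 tree can hold 16,384 cNFTs; a depth-20 tree over a million.
console.log(maxCnftCapacity(14)); // 16384
console.log(maxCnftCapacity(20)); // 1048576
```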
-The **canopy depth** is the number of proof nodes that are stored onchain for verification. To verify a leaf you need the complete proof path for
-the tree. The complete proof path is made up of one proof node for every “layer”
-of the tree, i.e. a max depth of 14 means there are 14 proof nodes. The larger the tree, the more proof nodes there are, and each
-node adds 32 bytes to a transaction, which can quickly exceed the
-maximum transaction size limit , so caching proof nodes onchain helps manaage this.
+The **canopy depth** is the number of proof nodes that are stored onchain for
+verification. To verify a leaf you need the complete proof path for the tree.
+The complete proof path is made up of one proof node for every “layer” of the
+tree, i.e. a max depth of 14 means there are 14 proof nodes. The larger the
+tree, the more proof nodes there are, and each node adds 32 bytes to a
+transaction, which can quickly exceed the maximum transaction size limit, so
+caching proof nodes onchain helps manage this.

 Each of these three values, max depth, max buffer size, and canopy depth, comes
 with a tradeoff. Increasing the value of any of these values increases the size
@@ -133,68 +145,71 @@ concurrent writes do you need.

 #### SPL State Compression and Noop Programs

-The SPL State Compression Program simplifies and standardizes the process of using merkle trees across the Solana ecosystem. It provides key functionalities for
-initializing Merkle trees, managing tree leafs (i.e. add, update, remove data),
-and verifying leaf data.
+The SPL State Compression Program simplifies and standardizes the process of
+using Merkle trees across the Solana ecosystem. It provides key functionalities
+for initializing Merkle trees, managing tree leaves (i.e. add, update, remove
+data), and verifying leaf data.
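Tying the canopy discussion above to numbers: since each proof node is a 32-byte hash and only the nodes not cached onchain in the canopy must travel with the transaction, the cost of a proof can be sketched with this back-of-the-envelope helper (hypothetical, not part of any SDK):

```typescript
// Only the proof nodes NOT cached onchain in the canopy must be
// included in the transaction; each one is a 32-byte hash.
function proofBytesRequired(maxDepth: number, canopyDepth: number): number {
  const nodesToSend = maxDepth - canopyDepth;
  return nodesToSend * 32;
}

// A depth-20 tree with no canopy needs 640 bytes of proof per leaf;
// a canopy of depth 10 cuts that to 320 bytes.
console.log(proofBytesRequired(20, 0)); // 640
console.log(proofBytesRequired(20, 10)); // 320
```

This is why a deeper canopy matters for composability: it leaves room in the 1,232-byte transaction budget for the rest of your instruction data.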
-The State Compression Program also leverages a separate “no op” (No Operation) program whose
-primary purpose log leaf data to the ledger state making it easier to index leaf data.
+The State Compression Program also leverages a separate “Noop” (No Operation)
+program whose primary purpose is to log leaf data to the ledger state, making it
+easier to index leaf data.

 #### Use the Ledger State for storage

-The Solana ledger is a continuous record of signed transactions,
-theoretically traceable all the way back to the genesis block. This means any
-data that has ever been put into a transaction exists in the ledger.
+The Solana ledger is a continuous record of signed transactions, theoretically
+traceable all the way back to the genesis block. This means any data that has
+ever been put into a transaction exists in the ledger.

 When you want to store compressed data, you pass it to the State Compression
-program where it gets hashed and emitted as an “event” to the no op program. The
+program where it gets hashed and emitted as an “event” to the Noop program. The
 hash is then stored in the corresponding concurrent Merkle tree. Since the data
-passed through a transaction and even exists on the no op program logs, it will
+passed through a transaction and even exists on the Noop program logs, it will
 exist on the ledger state permanently.

 #### Index data for easy lookup

-Normally, you would access onchain data by fetching it from an account.
-However, when using state compression, its a bit more complicated .
+Normally, you would access onchain data by fetching it from an account. However,
+when using state compression, it’s a bit more complicated.

-Instead of being stored in an account, compressed data resides in the ledger state.
-The easiest way to access the full data is through the logs of the no op
+Instead of being stored in an account, compressed data resides in the ledger
+state.
The easiest way to access the full data is through the logs of the Noop
 instruction, but while this data will in a sense exist in the ledger state
 forever, it will likely be inaccessible through validators after a certain
 period of time.

 To save space and be more performant, validators don’t retain every transaction
 back to the genesis block. The specific amount of time you’ll be able to access
-the no op instruction logs related to your data will vary based on the validator,
+the Noop instruction logs related to your data will vary based on the validator,
 but eventually you’ll lose access to it if you’re relying directly on
 instruction logs.

-Technically, it is possible to replay the entire transaction history back to the genesis block but
-this is impractical and unperformant for most teams.
-Instead, a better approach is using an indexer that will observe the events sent to the no op
-program and store the relevant data offchain. That way you don’t need to worry
-about old data becoming inaccessible.
+Technically, it is possible to replay the entire transaction history back to the
+genesis block, but this is impractical and slow for most teams. Instead, a
+better approach is to use an indexer that will observe the events sent to the
+Noop program and store the relevant data offchain. That way you don’t need to
+worry about old data becoming inaccessible.

 ### Create a cNFT Collection

-Now that we've covered the theory, let’s focus on the
-main point of this lesson: how to create a cNFT collection.
+Now that we've covered the theory, let’s focus on the main point of this lesson:
+how to create a cNFT collection.

 Fortunately, you can use tools created by Solana Foundation, the Solana
 developer community, and Metaplex to simplify the process. Specifically, we’ll
 be using the `@solana/spl-account-compression` SDK, the Metaplex Bubblegum
 program, and the Bubblegum program’s corresponding typescript SDK
-`@metaplex-foundation/mpl-bugglegum`.
`@metaplex-foundation/mpl-bubblegum` alongside the Metaplex Umi library
+`@metaplex-foundation/umi`.

 #### Prepare metadata

-Before starting, you’ll need to prepare your NFT metadata similar to how you would with a Candy Machine. An NFT is simply a token with
-metadata that follows the NFT standard. In other words, heres an example of how it should look like:
+Before starting, you’ll need to prepare your NFT metadata similar to how you
+would with a Candy Machine. An NFT is simply a token with metadata that follows
+the NFT standard. In other words, here’s an example of what it should look like:

 ```json
 {
@@ -220,37 +235,91 @@
 }
 ```

-Depending on your project, you may be able to generate this metadata dynamically or you have a separate JSON file prepared for each cNFT . You’ll also
-need any other assets referenced by the JSON, such as the `image` url shown in
-the example above.
+Depending on your project, you may be able to generate this metadata dynamically
+or you may have a separate JSON file prepared for each cNFT. You’ll also need
+any other assets referenced by the JSON, such as the `image` URL shown in the
+example above.
+
+#### Setting up Umi
+
+Before we start creating Collection NFTs, we have to set up Umi. Umi is a
+modular framework for building and using JavaScript clients for Solana onchain
+programs that was created by Metaplex. Note that Umi provides distinct
+implementations for many components compared to web3.js, such as Keypairs,
+PublicKeys, and Connections, but converting from web3.js versions to Umi
+equivalents is simple. To begin, we first need to initialize an Umi instance.
+
+```typescript
+import { createUmi } from "@metaplex-foundation/umi-bundle-defaults";
+import { clusterApiUrl } from "@solana/web3.js";
+
+const umi = createUmi(clusterApiUrl("devnet"));
+```
+
+Next, we attach a signer to our Umi instance:
+
+```typescript
+import { clusterApiUrl } from "@solana/web3.js";
+import { createTree, mplBubblegum } from "@metaplex-foundation/mpl-bubblegum";
+import { dasApi } from "@metaplex-foundation/digital-asset-standard-api";
+import { keypairIdentity } from "@metaplex-foundation/umi";
+import { createUmi } from "@metaplex-foundation/umi-bundle-defaults";
+import { getOrCreateKeypair } from "./utils";
+
+const umi = createUmi(clusterApiUrl("devnet"));
+
+// Get the keypair from the .env file or create a new one
+const wallet = await getOrCreateKeypair("Wallet1");
+
+// Convert to a Umi-compatible keypair
+const umiKeypair = umi.eddsa.createKeypairFromSecretKey(wallet.secretKey);
+
+// Load the DAS API and MPL Bubblegum plugins into Umi, and set the Umi
+// identity using a keypair, which acts as the signer for transactions.
+umi.use(keypairIdentity(umiKeypair)).use(mplBubblegum()).use(dasApi());
+```

 #### Create Collection NFT

 If you want your cNFTs to be part of a collection, you’ll need to create a
-Collection NFT **before** you start minting cNFTs. A Collection NFT is a traditional NFT
-that serves as the reference binding your cNFTs together into a single collection.
-To create a Collection NFT you can use the `@metaplex-foundation/js` library. Just make
-sure you set `isCollection` field to `true`.
+Collection NFT **before** you start minting cNFTs. A Collection NFT is a
+traditional NFT that serves as the reference binding your cNFTs together into a
+single collection. To create a Collection NFT, we will use the `createNft`
+method from the `@metaplex-foundation/mpl-token-metadata` library. Just make
+sure you set the `isCollection` field to `true`.
You can find the documentation for the
+`createNft` method on the “Minting Assets” page, in the “Create helpers”
+section of the
+[Metaplex Token Metadata documentation](https://developers.metaplex.com/token-metadata/mint).
+
+In the code below, `generateSigner(umi)` is used to create a new keypair (or
+signer) that represents the mint address for the Collection NFT. This signer is
+a unique address that will serve as the mint for your Collection NFT. It ensures
+that each Collection NFT has a distinct mint address.

 ```typescript
-const collectionNft = await metaplex.nfts().create({
-  uri: someUri,
+import { percentAmount, generateSigner } from "@metaplex-foundation/umi";
+import { createNft } from "@metaplex-foundation/mpl-token-metadata";
+
+const collectionMint = generateSigner(umi);
+
+await createNft(umi, {
+  mint: collectionMint,
   name: "Collection NFT",
-  sellerFeeBasisPoints: 0,
-  updateAuthority: somePublicKey,
-  mintAuthority: somePublicKey,
-  tokenStandard: 0,
+  uri: randomUri,
+  authority: umi.identity,
+  updateAuthority: umi.identity.publicKey,
+  sellerFeeBasisPoints: percentAmount(0),
   symbol: "Collection",
   isMutable: true,
   isCollection: true,
-});
+}).sendAndConfirm(umi, { send: { commitment: "finalized" } });
 ```

 #### Create Merkle tree Account

-When creating compressed NFTs (cNFTs), you need to setup an an account for the Concurrent Merkle Tree. This Merkle tree account belongs to the
-SPL State Compression program. Before you can do any cNFT related actions, you
-need to create an empty Merkle tree account with the appropriate size.
+When creating compressed NFTs (cNFTs), you need to set up an account for the
+Concurrent Merkle Tree. This Merkle tree account belongs to the SPL State
+Compression program. Before you can do any cNFT-related actions, you need to
+create an empty Merkle tree account with the appropriate size.

 The variables impacting the size of the account are:

 1. Max depth
 2.
Max buffer size
3. Canopy depth

-The Max depth and Max buffer size must be selected from an existing set of valid pairs. The
-table below shows the valid pairs along with the number of cNFTs that can be
-created with those values.
+The Max depth and Max buffer size must be selected from an existing set of valid
+pairs. The table below shows the valid pairs along with the number of cNFTs that
+can be created with those values.

| Max Depth | Max Buffer Size | Max Number of cNFTs |
| --------- | --------------- | ------------------- |

@@ -311,210 +380,108 @@ bidding system for your cNFTs. The canopy effectively caches proof nodes
onchain so you don’t have to pass all of them into the transaction, allowing for
more complex transactions.

-Increasing any of these three values increases the size of the account, which also
-increasing the cost associated with creating it. Weigh the benefits accordingly
-when choosing the values.
+Increasing any of these three values increases the size of the account, which
+also increases the cost associated with creating it. Weigh the benefits
+accordingly when choosing the values.

-Once you know these values, you can use the `createAllocTreeIx` helper function
-from the `@solana/spl-account-compression` TS SDK to create the instruction for
-creating the Merkle Tree account.
+Once you know these values, you can use the `createTree` method from the
+`@metaplex-foundation/mpl-bubblegum` package to create your tree.
```typescript -import { createAllocTreeIx } from "@solana/spl-account-compression" - -// Generate a treeKeypair for the Merkle tree -const treeKeypair = Keypair.generate() - -const allocTreeInstruction = await createAllocTreeIx( - connection, - treeKeypair.publicKey, - payer.publicKey, - { maxDepth: 20; maxBufferSize: 256 }, - canopyDepth -) -``` - -Note that createAllocTreeIx is simply a helper function for calculating the size of the Merkle tree -account and creating the instruction to send to the System Program to -allocating space for it. This function doesn’t interact with any -compression-specific programs (like actually adding data to the tree). - -#### Use Bubblegum to Initialize Your Tree +import { createTree } from "@metaplex-foundation/mpl-bubblegum"; +import { + ValidDepthSizePair, +} from "@solana/spl-account-compression" -With the empty tree account created, the next step is to initialize the tree using the Bubblegum program. This process involves creating a -tree config account which allows the Bubblegum Program to add cNFT-specific tracking and functionality. + const maxDepthSizePair: ValidDepthSizePair = { + maxDepth: 3, + maxBufferSize: 8, + }; -Version 0.7 of the `@metaplex-foundation/mpl-bubblegum` Typescript SDK provides the -helper function `createCreateTreeInstruction` for calling the `create_tree` -instruction on the Bubblegum program. As part of the call, you’ll need to derive -the `treeAuthority` PDA expected by the program. This PDA uses the tree’s -address as a seed. + const merkleTree = generateSigner(umi); -```typescript -import { - createAllocTreeIx, - SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, - SPL_NOOP_PROGRAM_ID, -} from "@solana/spl-account-compression" -import { - PROGRAM_ID as BUBBLEGUM_PROGRAM_ID, - createCreateTreeInstruction, -} from "@metaplex-foundation/mpl-bubblegum" - -... 
-//Derive the treeAuthority PDA from the tree's public key -const [treeAuthority, _bump] = PublicKey.findProgramAddressSync( - [treeKeypair.publicKey.toBuffer()], - BUBBLEGUM_PROGRAM_ID -) -//Create the instruction to initialize the tree using Bubblegum -const createTreeInstruction = createCreateTreeInstruction( - { - treeAuthority, //Derived PDA for treeAuthority - merkleTree: treeKeypair.publicKey,// Public key of the Merkle Tree - payer: payer.publicKey, //Payer for transaction fees - treeCreator: payer.publicKey,// the address that will be listed as the tree creator - logWrapper: SPL_NOOP_PROGRAM_ID,// no op program for exposing data to indexers - compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, // Compression program ID - }, - { - maxBufferSize: 256, // Max buffer size - maxDepth: 20, // Max depth of the tree + const builder = await createTree(umi, { + merkleTree, + maxDepth: maxDepthSizePair.maxDepth, // Max depth of the tree + maxBufferSize: maxDepthSizePair.maxBufferSize // Max buffer size, public: false, // Set to false to restrict minting to the tree creator/delegate - }, - BUBBLEGUM_PROGRAM_ID -) + }); + ``` -The list below shows the required input for this helper function: - -- `accounts` - An object representing the accounts required by the instruction. - This includes: - - `treeAuthority` - Bubblegum expects this to be a PDA derived using the - Merkle tree address as a seed - - `merkleTree` - The Merkle tree account - - `payer` - The address paying for transaction fees, rent, etc. 
- - `treeCreator` - The address to list as the tree creator
  - `logWrapper` - The program to use to expose the data to indexers through
    logs; this should be the address of the SPL Noop program unless you have
    some other custom implementation
  - `compressionProgram` - The compression program to use for initializing the
    Merkle tree; this should be the address of the SPL State Compression program
    unless you have some other custom implementation
- `args` - An object representing additional arguments required by the
  instruction. This includes:
  - `maxBufferSize` - The max buffer size of the Merkle tree
  - `maxDepth` - The max depth of the Merkle tree
  - `public` - When set to `true`, anyone will be able to mint cNFTs from the
    tree; when set to `false`, only the tree creator or tree delegate will be
    able to min cNFTs from the tree

In the code above, we define an object of type `ValidDepthSizePair` from the
`@solana/spl-account-compression` package, setting `maxDepth` to 3 and
`maxBufferSize` to 8 to define valid limits for the Merkle tree. We then
generate a `merkleTree` signer using `generateSigner` with the `umi` instance.
After that, we invoke `createTree`, passing the `umi` instance, the `merkleTree`
signer, and the parameters from `maxDepthSizePair` to configure the tree's
maximum depth and buffer size. Finally, we set the optional `public` parameter
to `false`, restricting minting to the tree's creator or delegate.

When submitted, this will invoke the `create_tree` instruction on the Bubblegum
-program. This instruction does three things:
+program under the hood. This instruction does three things:

1. Creates the tree config PDA account
-2. Initializes the tree config account with appropriate initial values
+2. Initializes the `TreeConfig` account with appropriate initial values that
+   hold additional data exclusive to compressed NFTs, such as the tree creator
+   and whether the tree is public.
3. 
Issues a CPI to the State Compression program to initialize the empty Merkle
   tree account

-Feel free to take a look at the program code
-[here](https://github.com/metaplex-foundation/mpl-bubblegum/blob/main/programs/bubblegum/program/src/lib.rs#L887).
+Feel free to take a look at the source code for the `create_tree` instruction
+and the `TreeConfig` account:

-#### Mint cNFTs
+- [Create Tree](https://github.com/metaplex-foundation/mpl-bubblegum/blob/df5b5feae8c161a7e22b9878a3b30a62f92ee864/programs/bubblegum/program/src/processor/create_tree.rs#L40)

-Now that we have the Merkle tree account and its corresponding Bubblegum tree config account
-initialized, its time to mint cNFTs to the tree, we have the choice of using either `mint_v1` or `mint_to_collection_v1` Bubblegum instructions, depending on whether
-you want to the minted cNFT to be part of a collection.
+- [Tree Config](https://github.com/metaplex-foundation/mpl-bubblegum/blob/42ffed35da6b2a673efacd63030a360eac3ae64e/programs/bubblegum/program/src/state/mod.rs#L17)

-Version 0.7 of the `@metaplex-foundation/mpl-bubblegum` Typescript SDK provides helper
-functions `createMintV1Instruction` and `createMintToCollectionV1Instruction` to
-make it easier for you to create the instructions.
+#### Mint cNFTs
+
+Now that we have the Merkle tree account and its corresponding Bubblegum tree
+config account initialized, it's time to mint cNFTs to the tree. We use `mintV1`
+or `mintToCollectionV1` from the `@metaplex-foundation/mpl-bubblegum` package,
+depending on whether we want the minted cNFT to be part of a collection.

-Both functions will require you to pass in the NFT metadata and a list of
-accounts required to mint the cNFT. Below is an example of minting to a
-collection:
+1. 
mintV1 ```typescript -const mintWithCollectionInstruction = createMintToCollectionV1Instruction( - { - payer: payer.publicKey, - merkleTree: treeAddress, - treeAuthority, - treeDelegate: payer.publicKey, - leafOwner: destination, - leafDelegate: destination, - collectionAuthority: payer.publicKey, - collectionAuthorityRecordPda: BUBBLEGUM_PROGRAM_ID, - collectionMint: collectionDetails.mint, - collectionMetadata: collectionDetails.metadata, - editionAccount: collectionDetails.masterEditionAccount, - compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, - logWrapper: SPL_NOOP_PROGRAM_ID, - bubblegumSigner, - tokenMetadataProgram: TOKEN_METADATA_PROGRAM_ID, - }, - { - metadataArgs: Object.assign(nftMetadata, { - collection: { key: collectionDetails.mint, verified: false }, - }), +await mintV1(umi, { + leafOwner, + merkleTree, + metadata: { + name: "My Compressed NFT", + uri: "https://example.com/my-cnft.json", + sellerFeeBasisPoints: 0, // 0% + collection: none(), + creators: [ + { address: umi.identity.publicKey, verified: false, share: 100 }, + ], }, -); +}).sendAndConfirm(umi); ``` -Notice that there are two arguments for the helper function: `accounts` and -`args`. The `args` parameter is simply the NFT metadata, while `accounts` is an -object listing the accounts required by the instruction. There are admittedly a -lot of them: - -- `payer` - the account that will pay for the transaction fees, rent, etc. 
-- `merkleTree` - the Merkle tree account -- `treeAuthority` - the tree authority; should be the same PDA you derived - previously -- `treeDelegate` - the tree delegate; this is usually the same as the tree - creator -- `leafOwner` - the desired owner of the compressed NFT being minted -- `leafDelegate` - the desired delegate of the compressed NFT being minted; this - is usually the same as the leaf owner -- `collectionAuthority` - the authority of the collection NFT -- `collectionAuthorityRecordPda` - optional collection authority record PDA; - there typically is none, in which case you should put the Bubblegum program - address -- `collectionMint` - the mint account for the collection NFT -- `collectionMetadata` - the metadata account for the collection NFT -- `editionAccount` - the master edition account of the collection NFT -- `compressionProgram` - the compression program to use; this should be the - address of the SPL State Compression program unless you have some other custom - implementation -- `logWrapper` - the program to use to expose the data to indexers through logs; - this should be the address of the SPL Noop program unless you have some other - custom implementation -- `bubblegumSigner` - a PDA used by the Bubblegrum program to handle collection - verification -- `tokenMetadataProgram` - the token metadata program that was used for the - collection NFT; this is usually always the Metaplex Token Metadata program - -Minting without a collection requires fewer accounts, none of which are -exclusive to minting without a collection. You can take a look at the example -below. 

```typescript
-const mintWithoutCollectionInstruction = createMintV1Instruction(
-  {
-    payer: payer.publicKey,
-    merkleTree: treeAddress,
-    treeAuthority,
-    treeDelegate: payer.publicKey,
-    leafOwner: destination,
-    leafDelegate: destination,
-    compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID,
-    logWrapper: SPL_NOOP_PROGRAM_ID,
-  },
-  {
-    message: nftMetadata,
+await mintToCollectionV1(umi, {
+  leafOwner,
+  merkleTree,
+  collectionMint,
+  metadata: {
+    name: "My Compressed NFT",
+    uri: "https://example.com/my-cnft.json",
+    sellerFeeBasisPoints: 0, // 0%
+    collection: { key: collectionMint, verified: false },
+    creators: [
+      { address: umi.identity.publicKey, verified: false, share: 100 },
+    ],
  },
-);
+}).sendAndConfirm(umi);
```

+Both functions require you to pass the NFT metadata and the accounts needed to
+mint the NFT, such as `leafOwner` and `merkleTree`, but `mintToCollectionV1`
+requires an additional `collectionMint` account, which is the mint address of
+the Collection NFT that the cNFT will belong to.
+
### Interact with cNFTs

It’s important to note that cNFTs _are not_ SPL tokens. That means your code
@@ -529,12 +496,15 @@ The simplest way to fetch data from an existing cNFT is to use the Read API,
you’ll need to use a supporting RPC Provider. Metaplex maintains a (likely
non-exhaustive)
[list of RPC providers](https://developers.metaplex.com/bubblegum/rpcs) that
-support the Read API. In this lesson we’ll be using
+support the Read API.
+
+In this lesson we’ll be using
[Helius](https://docs.helius.dev/compression-and-das-api/digital-asset-standard-das-api)
as they have free support for Devnet.

To fetch a specific cNFT using the Read API you need to derive the cNFT’s asset
-ID, from the leaf index (which you track). However, after minting cNFTs you’ll have at most two pieces of information:
+ID from the leaf index (which you track). However, after minting cNFTs you’ll
+have at most two pieces of information:

1. The transaction signature
2. 
The leaf index (possibly) @@ -548,8 +518,8 @@ index. This is a reasonable assumption for most mints given that the minting will be controlled by your code and can be set up sequentially so that your code can -track which index is going to be used for each mint. that is, the first mint will -use index 0, the second index 1, etc. +track which index is going to be used for each mint. that is, the first mint +will use index 0, the second index 1, etc. Once you have the leaf index, you can derive the cNFT’s corresponding asset ID. When using Bubblegum, the asset ID is a PDA derived using the Bubblegum program @@ -564,32 +534,28 @@ happen and stores the cNFT metadata that was hashed and stored in the Merkle tree. This enables them to surface that data when requested. This asset id is what the indexer uses to identify the particular asset. -Heres how you can fetch the cNFT using the `getLeafAssetId` helper function from the -Bubblegum SDK, and the `getAsset` method provided by your RPC provider: +Heres how you can fetch the cNFT using the `findLeafAssetIdPda` helper function +from the Bubblegum SDK, and the `getAsset` method provided by your RPC provider: ```typescript -const assetId = await getLeafAssetId(treeAddress, new BN(leafIndex)); -const response = await fetch(process.env.RPC_URL, { - method: "POST", - headers: { "Content-Type": "application/json" }, - body: JSON.stringify({ - jsonrpc: "2.0", - id: "my-id", - method: "getAsset", - params: { - id: assetId, - }, - }), +const assetId = findLeafAssetIdPda(umi, { + merkleTree, + leafIndex: leaf.nonce, }); +``` + +Now that you have the Asset ID , you can now fetch the cNFT, we will simply use +the `getAsset` method provided by the supporting RPC provider and the dasApi +library: -const { result } = await response.json(); -console.log(JSON.stringify(result, null, 2)); +```typescript +const rpcAsset = await umi.rpc.getAsset(assetId); ``` -After fetching the cNFT using the getAsset RPC method, the returned JSON object will 
contain metadata that is comprehensive of what a traditional NFT’s -on and offchain metadata would look like combined. For example, you can find -the cNFT attributes at `content.metadata.attributes` or the image at -`content.files.uri`. +After fetching the cNFT using the getAsset RPC method, the returned JSON object +will contain metadata that is comprehensive of what a traditional NFT’s on and +offchain metadata would look like combined. For example, you can find the cNFT +attributes at `content.metadata.attributes` or the image at `content.files.uri`. #### Query cNFTs @@ -614,18 +580,15 @@ to learn how to use them correctly. Just as with a standard SPL token transfer, security is important. An SPL token transfer, however, makes verifying transfer authority very easy. It’s built into the SPL Token program and standard signing. A compressed token’s ownership is -harder to verify. The actual verification will happen program-side, but -your client-side code needs to provide additional information to make it -possible. - -While there is a Bubblegum `createTransferInstruction` helper function, there is -more assembly required than usual. Specifically, the Bubblegum program needs to -verify that the entirety of the cNFT’s data is what the client asserts before a -transfer can occur. The entirety of the cNFT data has been hashed and stored as -a single leaf on the Merkle tree, and the Merkle tree is simply a hash of all -the tree’s leafs and branches. Because of this, you can’t simply tell the -program what account to look at and have it compare that account’s `authority` -or `owner` field to the transaction signer. +harder to verify. The actual verification will happen program-side, but your +client-side code needs to provide additional information to make it possible. + +The Bubblegum program needs to verify that the entirety of the cNFT’s data is +what the client asserts before a transfer can occur. 
The entirety of the cNFT +data has been hashed and stored as a single leaf on the Merkle tree, and the +Merkle tree is simply a hash of all the tree’s leafs and branches. Because of +this, you can’t simply tell the program what account to look at and have it +compare that account’s `authority` or `owner` field to the transaction signer. Instead, you need to provide the entirety of the cNFT data and any of the Merkle tree’s proof information that isn’t stored in the canopy. That way, the program @@ -660,118 +623,33 @@ const assetDataResponse = await fetch(process.env.RPC_URL, { }); const assetData = (await assetDataResponse.json()).result; -const assetProofResponse = await fetch(process.env.RPC_URL, { - method: "POST", - headers: { "Content-Type": "application/json" }, - body: JSON.stringify({ - jsonrpc: "2.0", - id: "my-id", - method: "getAssetProof", - params: { - id: assetId, - }, - }), -}); -const assetProof = (await assetProofResponse.json()).result; -``` - -The third step is to fetch the Merkle tree account. We do this by using the `ConcurrentMerkleTreeAccount` type from -`@solana/spl-account-compression`: - -```typescript -const treePublicKey = new PublicKey(assetData.compression.tree); +import { + getAssetWithProof, + transfer, +} from "@metaplex-foundation/mpl-bubblegum"; -const treeAccount = await ConcurrentMerkleTreeAccount.fromAccountAddress( - connection, - treePublicKey, -); +const assetWithProof = await getAssetWithProof(umi, assetId); ``` -Step four is the most complex step. Using the three pieces of -information gathered, you’ll need to assemble the proof path for the cNFT’s -corresponding leaf. The proof path is represented as accounts passed to the -program instruction. The program uses each of the account addresses as proof -nodes to prove the leaf data is what you say it is. +You can use the transfer method from the `@metaplex-foundation/mpl-bubblegum` +package. 
The method requires two arguments: the `umi` instance and an object +containing the following fields: -The full proof is provided by the indexer as shown above in `assetProof`. -However, you can exclude the same number of tail-end accounts from the proof as -the depth of the canopy. +- `assetWithProof` - Data representing the asset and its associated Merkle + proof. +- `leafOwner` - the owner of the leaf (cNFT) in question +- `newLeafOwner` - the address of the new owner after the transfer -```typescript -const canopyDepth = treeAccount.getCanopyDepth() || 0; - -const proofPath: AccountMeta[] = assetProof.proof - .map((node: string) => ({ - pubkey: new PublicKey(node), - isSigner: false, - isWritable: false, - })) - .slice(0, assetProof.proof.length - canopyDepth); -``` - -Finally, you can assemble the transfer instruction. The instruction helper -function, `createTransferInstruction`, requires the following arguments: - -- `accounts` - a list of instruction accounts, as expected; they are as follows: - - `merkleTree` - the Merkle tree account - - `treeAuthority` - the Merkle tree authority - - `leafOwner` - the owner of the leaf (cNFT) in question - - `leafDelegate` - the delegate of the leaf (cNFT) in question; if no delegate - has been added then this should be the same as `leafOwner` - - `newLeafOwner` - the address of the new owner post-transfer - - `logWrapper` - the program to use to expose the data to indexers through - logs; this should be the address of the SPL Noop program unless you have - some other custom implementation - - `compressionProgram` - the compression program to use; this should be the - address of the SPL State Compression program unless you have some other - custom implementation - - `anchorRemainingAccounts` - this is where you add the proof path -- `args` - additional arguments required by the instruction; they are: - - `root` - the root Merkle tree node from the asset proof; this is provided by - the indexer as a string and must be 
converted to bytes first - - `dataHash` - the hash of the asset data retrieved from the indexer; this is - provided by the indexer as a string and must be converted to bytes first - - `creatorHash` - the hash of the cNFT creator as retrieved from the indexer; - this is provided by the indexer as a string and must be converted to bytes - first - - `nonce` - used to ensure that no two leafs have the same hash; this value - should be the same as `index` - - `index` - the index where the cNFT’s leaf is located on the Merkle tree - -An example of this is shown below. Note that the first 3 lines of code grab -additional information nested in the objects shown previously so they are ready -to go when assembling the instruction itself. +const leafOwner = new PublicKey(assetData.ownership.owner); ```typescript -const treeAuthority = treeAccount.getAuthority(); -const leafOwner = new PublicKey(assetData.ownership.owner); -const leafDelegate = assetData.ownership.delegate - ? new PublicKey(assetData.ownership.delegate) - : leafOwner; - -const transferInstruction = createTransferInstruction( - { - merkleTree: treePublicKey, - treeAuthority, - leafOwner, - leafDelegate, - newLeafOwner: receiver, - logWrapper: SPL_NOOP_PROGRAM_ID, - compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, - anchorRemainingAccounts: proofPath, - }, - { - root: [...new PublicKey(assetProof.root.trim()).toBytes()], - dataHash: [ - ...new PublicKey(assetData.compression.data_hash.trim()).toBytes(), - ], - creatorHash: [ - ...new PublicKey(assetData.compression.creator_hash.trim()).toBytes(), - ], - nonce: assetData.compression.leaf_id, - index: assetData.compression.leaf_id, - }, -); +import { transfer } from "@metaplex-foundation/mpl-bubblegum"; + +await transfer(umi, { + ...assetWithProof, + leafOwner: currentLeafOwner, + newLeafOwner: newLeafOwner.publicKey, +}).sendAndConfirm(umi); ``` ### Conclusion @@ -784,7 +662,8 @@ functionality, take a look at the [Bubblegum client source 
code](https://github.com/metaplex-foundation/mpl-bubblegum/tree/main/clients/js-solita) and leverage the helper functions it provides. -Keep in mind that compression is still new. Available tooling will improve quickly but the principles you’ve learned in this lesson will likely remain the +Keep in mind that compression is still new. Available tooling will improve +quickly but the principles you’ve learned in this lesson will likely remain the same. These principles can also be broadened to arbitrary state compression, so be sure to master them here so you’re ready for more fun stuff in future lessons! @@ -837,11 +716,10 @@ We will be writing all of our code in the `index.ts`. ### 2. Create the Merkle tree account -We’ll start by creating the Merkle tree account. Let’s wrap this in a -function that will eventually create _and_ initialize the account. We’ll put it -below our `main` function in `index.ts`. Let’s call it -`createAndInitializeTree`. For this function to work, it will need the following -parameters: +We’ll start by creating the Merkle tree account. Let’s wrap this in a function +that will eventually create _and_ initialize the account. We’ll put it below in +our `index.ts`. Let’s call it `createAndInitializeTree`. For this function to +work, it will need the following parameters: - `connection` - a `Connection` to use for interacting with the network. - `payer` - a `Keypair` that will pay for transactions. @@ -849,10 +727,6 @@ parameters: `@solana/spl-account-compression`. It’s a simple object with properties `maxDepth` and `maxBufferSize` that enforces a valid combination of the two values. -- `canopyDepth` - a number for the canopy depth In the body of the function, - we’ll generate a new address for the tree, then create the instruction for - allocating a new Merkle tree account by calling `createAllocTreeIx` from - `@solana/spl-account-compression`. 
```typescript async function createAndInitializeTree( @@ -862,18 +736,10 @@ async function createAndInitializeTree( canopyDepth: number, ) { const treeKeypair = Keypair.generate(); - - const allocTreeInstruction = await createAllocTreeIx( - connection, - treeKeypair.publicKey, - payer.publicKey, - maxDepthSizePair, - canopyDepth, - ); } ``` -### 3. Use Bubblegum to initialize the Merkle tree and create the tree config account +### 2. Use Bubblegum to initialize the Merkle tree and create the tree config account Now that the instruction for creating the tree is ready, we can create an instruction for invoking `create_tree` on the Bubblegum program. This will @@ -889,9 +755,6 @@ This instruction needs us to provide the following: - `payer` - the transaction fee payer - `treeCreator` - the address of the tree creator; we’ll make this the same as `payer` - - `logWrapper` - make this the `SPL_NOOP_PROGRAM_ID` - - `compressionProgram` - make this the `SPL_ACCOUNT_COMPRESSION_PROGRAM_ID` -- `args` - a list of instruction arguments; this includes: - `maxBufferSize` - the buffer size from our function’s `maxDepthSizePair` parameter - `maxDepth` - the max depth from our function’s `maxDepthSizePair` parameter @@ -904,106 +767,73 @@ transaction. 
Keep in mind that the transaction needs to be signed by both the ```typescript async function createAndInitializeTree( - connection: Connection, payer: Keypair, maxDepthSizePair: ValidDepthSizePair, - canopyDepth: number, ) { - const treeKeypair = Keypair.generate(); - - const allocTreeInstruction = await createAllocTreeIx( - connection, - treeKeypair.publicKey, - payer.publicKey, - maxDepthSizePair, - canopyDepth, - ); + const merkleTree = generateSigner(umi); - const [treeAuthority, _bump] = PublicKey.findProgramAddressSync( - [treeKeypair.publicKey.toBuffer()], - BUBBLEGUM_PROGRAM_ID, - ); + const builder = await createTree(umi, { + merkleTree, + maxDepth: maxDepthSizePair.maxDepth, // Max depth of the tree, + maxBufferSize: maxDepthSizePair.maxBufferSize, // Max buffer size, + public: false, // Set to false to restrict minting to the tree creator/delegate + }); - const createTreeInstruction = createCreateTreeInstruction( - { - treeAuthority, - merkleTree: treeKeypair.publicKey, - payer: payer.publicKey, - treeCreator: payer.publicKey, - logWrapper: SPL_NOOP_PROGRAM_ID, - compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, - }, - { - maxBufferSize: maxDepthSizePair.maxBufferSize, - maxDepth: maxDepthSizePair.maxDepth, - public: false, - }, - ); + builder.sendAndConfirm(umi); - const transaction = new Transaction().add(allocTreeInstruction, createTreeInstruction); - tx.feePayer = payer.publicKey; + const merkleTreeAddress = merkleTree.publicKey; try { - const transactionSignature = await sendAndConfirmTransaction( - connection, - transaction, - [treeKeypair, payer], - { - commitment: "confirmed", - skipPreflight: true, - }, + const explorerLink = getExplorerLink( + "transaction", + merkleTreeAddress, + "devnet", ); - const explorerLink = getExplorerLink("transaction",transactionSignature, "devnet") - - console.log(`Transaction submitted: ${explorerLink}`) + console.log(`Transaction submitted: ${explorerLink}`); + console.log("Tree Address:", 
merkleTreeAddress); - console.log("Tree Address:", treeKeypair.publicKey.toBase58()); - - return treeKeypair.publicKey; - } catch (err: any) { - console.error("\nFailed to create Merkle tree:", err); - throw err; + return merkleTreeAddress; + } catch (error: any) { + console.error("\nFailed to create merkle tree:", error); + throw error; } } ``` -To test what you have so far, call -`createAndInitializeTree` from `main` and provide small values for the max depth -and max buffer size. +To test what you have so far, call `createAndInitializeTree` and provide small +values for the max depth and max buffer size. ```typescript - function main() { - const connection = new Connection(clusterApiUrl("devnet"), "confirmed"); - const wallet = await getOrCreateKeypair("Wallet_1"); - await airdropSolIfNeeded(wallet.publicKey); +const connection = new Connection(clusterApiUrl("devnet"), "confirmed"); +const wallet = await getOrCreateKeypair("Wallet1"); +await airdropSolIfNeeded(wallet.publicKey); - const maxDepthSizePair: ValidDepthSizePair = { - maxDepth: 3, - maxBufferSize: 8, - }; +const maxDepthSizePair: ValidDepthSizePair = { + maxDepth: 3, + maxBufferSize: 8, +}; - const canopyDepth = 0; +const canopyDepth = 0; - const treeAddress = await createAndInitializeTree( - connection, - wallet, - maxDepthSizePair, - canopyDepth, - ); -} +const treeAddress = await createAndInitializeTree( + connection, + wallet, + maxDepthSizePair, + canopyDepth, +); ``` -Remember that Devnet SOL is limited so if you test too many times you -might run out of Devnet SOL before we get to minting. To test, in your terminal -run the following: +Remember that Devnet SOL is limited so if you test too many times you might run +out of Devnet SOL before we get to minting. To test, in your terminal run the +following: `npm run start` #### 4. Mint cNFTs to your tree -Believe it or not, that’s all it takes to set up your tree to compressed -NFTs! Now let’s focus on the minting process. 
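Before writing the minting helper, note that each iteration of the mint loop needs its own per-NFT metadata (the lab's `./utils` exposes a `createNftMetadata` helper for this, but its implementation isn't shown in this lesson). Below is a hedged sketch of what such a metadata factory might look like — the name, symbol, and URI patterns are placeholder assumptions, not the lab's actual values:

```typescript
// Hedged sketch of a per-NFT metadata factory similar to the lab's
// `createNftMetadata` util. Name/symbol/uri patterns are assumptions.
interface SketchCnftMetadata {
  name: string;
  symbol: string;
  uri: string;
  sellerFeeBasisPoints: number;
  creators: { address: string; verified: boolean; share: number }[];
}

function createNftMetadataSketch(
  creatorAddress: string,
  index: number,
): SketchCnftMetadata {
  return {
    name: `CNFT ${index}`,
    symbol: "CNFT",
    // A real implementation would point at uploaded JSON metadata.
    uri: `https://example.com/cnft/${index}.json`,
    sellerFeeBasisPoints: 0,
    creators: [{ address: creatorAddress, verified: false, share: 100 }],
  };
}
```

Producing distinct metadata per index also makes it easy to correlate each minted cNFT with the leaf index it occupies.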

First, let’s declare a function called `mintCompressedNftToCollection`. It will
need the following parameters:

@@ -1036,143 +866,57 @@ them real quick:

- `payer` - the account that will pay for the transaction fees, rent, etc.
- `merkleTree` - the Merkle tree account
-- `treeAuthority` - the tree authority; should be the same PDA you derived
-  previously
-- `treeDelegate` - the tree delegate; this is usually the same as the tree
-  creator
-- `leafOwner` - the desired owner of the compressed NFT being minted
-- `leafDelegate` - the desired delegate of the compressed NFT being minted; this
-  is usually the same as the leaf owner
-- `collectionAuthority` - the authority of the collection NFT
-- `collectionAuthorityRecordPda` - optional collection authority record PDA;
-  there typically is none, in which case you should put the Bubblegum program
-  address
-- `collectionMint` - the mint account for the collection NFT
-- `collectionMetadata` - the metadata account for the collection NFT
-- `editionAccount` - the master edition account of the collection NFT
-- `compressionProgram` - the compression program to use; this should be the
-  address of the SPL State Compression program unless you have some other custom
-  implementation
-- `logWrapper` - the program to use to expose the data to indexers through logs;
-  this should be the address of the SPL Noop program unless you have some other
-  custom implementation
-- `bubblegumSigner` - a PDA used by the Bubblegrum program to handle collection
-  verification
-- `tokenMetadataProgram` - the token metadata program that was used for the
-  collection NFT; this is usually always the Metaplex Token Metadata program
-
-When you put it all together, this is what it’ll look like:
+When you put it all together, this is what it’ll look like:

```typescript
- function 
mintCompressedNftToCollection( +function mintCompressedNftToCollection( connection: Connection, payer: Keypair, treeAddress: PublicKey, collectionDetails: CollectionDetails, amount: number, ) { - // Derive the tree authority PDA ('TreeConfig' account for the tree account) - const [treeAuthority] = PublicKey.findProgramAddressSync( - [treeAddress.toBuffer()], - BUBBLEGUM_PROGRAM_ID, - ); - // Derive the Bubblegum signer, used by the Bubblegum program to handle "collection verification" // Only used for `createMintToCollectionV1` instruction const [bubblegumSigner] = PublicKey.findProgramAddressSync( [Buffer.from("collection_cpi", "utf8")], BUBBLEGUM_PROGRAM_ID, ); - - for (let i = 0; i < amount; i++) { - // Compressed NFT Metadata - const compressedNFTMetadata = createNftMetadata(payer.publicKey, i); - - // Create the instruction to "mint" the compressed NFT to the tree - const mintInstruction = createMintToCollectionV1Instruction( - { - payer: payer.publicKey, // The account that will pay for the transaction - merkleTree: treeAddress, // The address of the tree account - treeAuthority, // The authority of the tree account, should be a PDA derived from the tree account address - treeDelegate: payer.publicKey, // The delegate of the tree account, should be the same as the tree creator by default - leafOwner: payer.publicKey, // The owner of the compressed NFT being minted to the tree - leafDelegate: payer.publicKey, // The delegate of the compressed NFT being minted to the tree - collectionAuthority: payer.publicKey, // The authority of the "collection" NFT - collectionAuthorityRecordPda: BUBBLEGUM_PROGRAM_ID, // Must be the Bubblegum program id - collectionMint: collectionDetails.mint, // The mint of the "collection" NFT - collectionMetadata: collectionDetails.metadata, // The metadata of the "collection" NFT - editionAccount: collectionDetails.masterEditionAccount, // The master edition of the "collection" NFT - compressionProgram: 
SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, - logWrapper: SPL_NOOP_PROGRAM_ID, - bubblegumSigner, - tokenMetadataProgram: TOKEN_METADATA_PROGRAM_ID, - }, - { - metadataArgs: Object.assign(compressedNFTMetadata, { - collection: { key: collectionDetails.mint, verified: false }, - }), - }, - ); - - try { - // Create new transaction and add the instruction - const instruction = new Transaction().add(mintInstruction); - - // Set the fee payer for the transaction - transaction.feePayer = payer.publicKey; - - // Send the transaction - const transactionSignature = await sendAndConfirmTransaction( - connection, - instruction, - [payer], - { commitment: "confirmed", skipPreflight: true }, - ); - - const explorerLink = getExplorerLink("transaction",transactionSignature, "devnet") - - console.log(`Transaction submitted: ${explorerLink}`) - } catch (error) { - console.error("\nFailed to mint compressed NFT:", error); - throw error; - } - } } ``` -This is a great point to test with a small tree. Simply update `main` to call -`getOrCreateCollectionNFT` then `mintCompressedNftToCollection`: +This is a great point to test with a small tree. 
Simply update `index.ts` to +call `getOrCreateCollectionNFT` then `mintCompressedNftToCollection`: ```typescript -function main() { - const connection = new Connection(clusterApiUrl("devnet"), "confirmed"); - const wallet = await getOrCreateKeypair("Wallet_1"); - await airdropSolIfNeeded(wallet.publicKey); +const connection = new Connection(clusterApiUrl("devnet"), "confirmed"); +const wallet = await getOrCreateKeypair("Wallet1"); +await airdropSolIfNeeded(wallet.publicKey); - const maxDepthSizePair: ValidDepthSizePair = { - maxDepth: 3, - maxBufferSize: 8, - }; +const maxDepthSizePair: ValidDepthSizePair = { + maxDepth: 3, + maxBufferSize: 8, +}; - const canopyDepth = 0; +const canopyDepth = 0; - const treeAddress = await createAndInitializeTree( - connection, - wallet, - maxDepthSizePair, - canopyDepth, - ); +const treeAddress = await createAndInitializeTree( + connection, + wallet, + maxDepthSizePair, + canopyDepth, +); - const collectionNft = await getOrCreateCollectionNFT(connection, wallet); +const collectionNft = await getOrCreateCollectionNFT(connection, wallet); - await mintCompressedNftToCollection( - connection, - wallet, - treeAddress, - collectionNft, - 2 ** maxDepthSizePair.maxDepth, - ); -} +await mintCompressedNftToCollection( + connection, + wallet, + treeAddress, + collectionNft, + 2 ** maxDepthSizePair.maxDepth, +); ``` Again, to run, in your terminal type: `npm run start` @@ -1187,10 +931,10 @@ accurate but is useless in conveying what the information is. Let’s start by declaring a function `logNftDetails` that takes two parameters `treeAddress` and `nftsMinted`. -Since theres no direct identifier of any kind that points -to our cNFT, we need to know the leaf index that was used when -we minted our cNFT. We can then use that to derive the asset ID used by the Read -API and then use the Read API to fetch our cNFT data. 
+Since there's no direct identifier of any kind that points to our cNFT, we need
+to know the leaf index that was used when we minted our cNFT. We can then use
+that to derive the asset ID used by the Read API and then use the Read API to
+fetch our cNFT data.
 
 In our case, we created a non-public tree and minted 8 cNFTs, so we know that
 the leaf indexes used were 0-7. With this, we can use the `getLeafAssetId`
@@ -1235,15 +979,15 @@ function logNftDetails(treeAddress: PublicKey, nftsMinted: number) {
   }
 }
 ```
 
-Helius monitors transaction logs in real time and stores the NFT
-metadata that was hashed and stored in the Merkle tree. This enables them to
-display that data when requested.
+Helius monitors transaction logs in real time and stores the NFT metadata that
+was hashed and stored in the Merkle tree. This enables them to display that data
+when requested.
 
 If we add a call to this function at the end of `main` and re-run your script,
-the data we get back in the console is very detailed. It includes all of
-the data you’d expect in both the onchain and offchain portion of a traditional
-NFT. You can find the cNFT’s attributes, files, ownership and creator
-information, and more.
+the data we get back in the console is very detailed. It includes all of the
+data you’d expect in both the onchain and offchain portion of a traditional NFT.
+You can find the cNFT’s attributes, files, ownership and creator information,
+and more.
 
 ```json
 {
@@ -1344,14 +1088,14 @@ to see what’s available.
 
 #### 6. Transfer a cNFT
 
 The last thing we’re going to add to our script is a cNFT transfer. Just as with
-a standard SPL token transfer, security is important. Unlike regular standard SPL
-token transfer, however, to build a secure transfer with state compression of
-any kind, the program performing the transfer needs the entire asset data.
+a standard SPL token transfer, security is important. 
Unlike regular standard +SPL token transfer, however, to build a secure transfer with state compression +of any kind, the program performing the transfer needs the entire asset data. The program, Bubblegum in this case, needs to be provided with the entire data that was hashed and stored on the corresponding leaf _and_ needs to be given the -“proof path” for the leaf in question. That makes cNFT transfers a bit more complicated -than SPL token transfers. +“proof path” for the leaf in question. That makes cNFT transfers a bit more +complicated than SPL token transfers. Remember, the general steps are: @@ -1372,7 +1116,7 @@ Inside that function, let’s fetch the asset data again then also fetch the ass proof. For good measure, let’s wrap everything in a `try catch`. ```typescript - function transferNft( +function transferNft( connection: Connection, assetId: PublicKey, sender: Keypair, @@ -1490,72 +1234,8 @@ function transferNft( }), }); const assetProof = (await assetProofResponse.json()).result; - - const treePublicKey = new PublicKey(assetData.compression.tree); - - const treeAccount = await ConcurrentMerkleTreeAccount.fromAccountAddress( - connection, - treePublicKey, - ); - - const canopyDepth = treeAccount.getCanopyDepth() || 0; - - const proofPath: AccountMeta[] = assetProof.proof - .map((node: string) => ({ - pubkey: new PublicKey(node), - isSigner: false, - isWritable: false, - })) - .slice(0, assetProof.proof.length - canopyDepth); - - const treeAuthority = treeAccount.getAuthority(); - const leafOwner = new PublicKey(assetData.ownership.owner); - const leafDelegate = assetData.ownership.delegate - ? 
new PublicKey(assetData.ownership.delegate) - : leafOwner; - - const transferInstruction = createTransferInstruction( - { - merkleTree: treePublicKey, - treeAuthority, - leafOwner, - leafDelegate, - newLeafOwner: receiver, - logWrapper: SPL_NOOP_PROGRAM_ID, - compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, - anchorRemainingAccounts: proofPath, - }, - { - root: [...new PublicKey(assetProof.root.trim()).toBytes()], - dataHash: [ - ...new PublicKey(assetData.compression.data_hash.trim()).toBytes(), - ], - creatorHash: [ - ...new PublicKey(assetData.compression.creator_hash.trim()).toBytes(), - ], - nonce: assetData.compression.leaf_id, - index: assetData.compression.leaf_id, - }, - ); - - const transaction = new Transaction().add(transferInstruction); - transaction.feePayer = sender.publicKey; - const txSignature = await sendAndConfirmTransaction( - connection, - transaction, - [sender], - { - commitment: "confirmed", - skipPreflight: true, - }, - ); - const explorerLink = getExplorerLink("transaction",transactionId, "devnet") - - console.log(`Transaction submitted: ${explorerLink}`) - } catch (err: any) { - console.error("\nFailed to transfer nft:", err); - throw err; } + } ``` @@ -1566,47 +1246,45 @@ entire collection using our function `logNftDetails`. You’ll note that the NFT at index zero will now belong to our new wallet in the `ownership` field. 
```typescript
-function main() {
-  const connection = new Connection(clusterApiUrl("devnet"), "confirmed");
-  const wallet = await getOrCreateKeypair("Wallet_1");
-  await airdropSolIfNeeded(wallet.publicKey);
+const connection = new Connection(clusterApiUrl("devnet"), "confirmed");
+const wallet = await getOrCreateKeypair("Wallet1");
+await airdropSolIfNeeded(wallet.publicKey);
 
-  const maxDepthSizePair: ValidDepthSizePair = {
-    maxDepth: 3,
-    maxBufferSize: 8,
-  };
+const maxDepthSizePair: ValidDepthSizePair = {
+  maxDepth: 3,
+  maxBufferSize: 8,
+};
 
-  const canopyDepth = 0;
+const canopyDepth = 0;
 
-  const treeAddress = await createAndInitializeTree(
-    connection,
-    wallet,
-    maxDepthSizePair,
-    canopyDepth,
-  );
+const treeAddress = await createAndInitializeTree(
+  connection,
+  wallet,
+  maxDepthSizePair,
+  canopyDepth,
+);
 
-  const collectionNft = await getOrCreateCollectionNFT(connection, wallet);
+const collectionNft = await getOrCreateCollectionNFT(connection, wallet);
 
-  await mintCompressedNftToCollection(
-    connection,
-    wallet,
-    treeAddress,
-    collectionNft,
-    2 ** maxDepthSizePair.maxDepth,
-  );
+await mintCompressedNftToCollection(
+  connection,
+  wallet,
+  treeAddress,
+  collectionNft,
+  2 ** maxDepthSizePair.maxDepth,
+);
 
-  const recieverWallet = await getOrCreateKeypair("Wallet_2");
-  const assetId = await getLeafAssetId(treeAddress, new BN(0));
-  await airdropSolIfNeeded(recieverWallet.publicKey);
+const recieverWallet = await getOrCreateKeypair("Wallet2");
+const assetId = await getLeafAssetId(treeAddress, new BN(0));
+await airdropSolIfNeeded(recieverWallet.publicKey);
 
-  console.log(
-    `Transfering ${assetId.toString()} from ${wallet.publicKey.toString()} to ${recieverWallet.publicKey.toString()}`,
-  );
+console.log(
+  `Transferring ${assetId.toString()} from ${wallet.publicKey.toString()} to ${recieverWallet.publicKey.toString()}`,
+);
 
-  await transferNft(connection, assetId, wallet, recieverWallet.publicKey);
+await transferNft(connection, 
assetId, wallet, recieverWallet.publicKey); - await logNftDetails(treeAddress, 8); -} +await logNftDetails(treeAddress, 8); ``` Go ahead and run your script. The whole thing should execute without failing, @@ -1627,8 +1305,8 @@ take a look at the solution code on the `solution` branch of the ### Challenge -It’s your turn to apply these concepts on your own! We’re not going to -gove you detailed instructions at this point, but here are some ideas: +It’s your turn to apply these concepts on your own! We’re not going to give you +detailed instructions at this point, but here are some ideas: 1. Create your own production cNFT collection 2. Build a UI for this lesson’s lab that will let you mint a cNFT and display it From 2a7533477064dec39d042c424e5b3e85af6054cc Mon Sep 17 00:00:00 2001 From: adpthegreat Date: Tue, 1 Oct 2024 21:14:39 +0100 Subject: [PATCH 3/6] - Followed guidelines in contributing.md - Refactored it to use newest dependencies - Replaced @metaplex- foundation/js with @metaplex-foundation-umi - Replaced old methods with their corresponding umi methods - Replaced all old code snippets - Gave clearer variable names - Removed redundant code - Ran prettier - Refactored codebase to be more modular --- .../state-compression/compressed-nfts.md | 570 +++++++----------- 1 file changed, 230 insertions(+), 340 deletions(-) diff --git a/content/courses/state-compression/compressed-nfts.md b/content/courses/state-compression/compressed-nfts.md index b5b9d5158..abc0d0d7c 100644 --- a/content/courses/state-compression/compressed-nfts.md +++ b/content/courses/state-compression/compressed-nfts.md @@ -2,8 +2,8 @@ title: Compressed NFTs objectives: - Create a compressed NFT collection using Metaplex’s Bubblegum program - - Mint compressed NFTs using the Bubblegum TS SDK - - Transfer compressed NFTs using the Bubblegum TS SDK + - Mint compressed NFTs using the Bubblegum program + - Transfer compressed NFTs using the Bubblegum program - Read compressed NFT data using the 
Read API description: "How to mint, transfer and read large-scale NFT collections using Metaplex's @@ -247,6 +247,7 @@ framework for building and using JavaScript clients for Solana onchain programs that was created by Metaplex. Note that Umi provides distinct implementations for many components compared to web3.js, such as Keypairs, PublicKeys, and Connections, but converting from web3.js versions to Umi equivalents is simple. + To begin, we first need to initialize an Umi instance. ```typescript @@ -256,7 +257,11 @@ import { clusterApiUrl } from "@solana/web3.js"; const umi = createUmi(clusterApiUrl("devnet")); ``` -Next we have to attach a signer to our Umi instance +The above code initializes an empty Umi instance and connects to the devnet +cluster with no signer or plugin attached. + +After we have done this we will add in imports then attach a signer to the Umi +instance ```typescript import { clusterApiUrl } from "@solana/web3.js"; @@ -411,7 +416,7 @@ import { In the code above, we define an object of type `ValidDepthSizePair` from the `@solana/spl-account-compression` program, setting maxDepth to 3 and -maxBufferSize to 8 to define valid liimits for the Merkle tree. We then generate +maxBufferSize to 8 to define valid limits for the Merkle tree. We then generate a merkleTree signer using `generateSigner` with the umi instance, after we've done this, we now invoke `createTree`, passing the umi instance, the merkleTree signer, and the parameters from maxDepthSizePair to configure the tree's maximum @@ -442,7 +447,7 @@ config account initialized, its time to mint cNFTs to the tree, we use `mintV1` or `mintToCollectionV1` from the `@metaplex-foundation/mpl-bubblegum` package, depending on whether we want the minted cNFT to be part of a collection. -1. mintV1 +### 1. mintV1 ```typescript await mintV1(umi, { @@ -460,6 +465,8 @@ await mintV1(umi, { }).sendAndConfirm(umi); ``` +### 2. 
mintToCollectionV1 + ```typescript await mintToCollectionV1(umi, { leafOwner, @@ -479,8 +486,8 @@ await mintToCollectionV1(umi, { Both functions require you to pass the NFT metadata and the accounts required to mint the NFT `leafOwner`, `merkleTree`, but the mintToCollectionV1 requires an -addition collectionMint account which is the mint address of the Collection NFT -to which the cNFT will be part. +additional collectionMint account which is the mint address of the Collection +NFT to which the cNFT will be part. ### Interact with cNFTs @@ -609,31 +616,21 @@ the `getAsset` and `getAssetProof` methods to fetch the asset data and proof, respectively. ```typescript -const assetDataResponse = await fetch(process.env.RPC_URL, { - method: "POST", - headers: { "Content-Type": "application/json" }, - body: JSON.stringify({ - jsonrpc: "2.0", - id: "my-id", - method: "getAsset", - params: { - id: assetId, - }, - }), -}); -const assetData = (await assetDataResponse.json()).result; +import { publicKey } from "@metaplex-foundation/umi"; +import { createUmi } from "@metaplex-foundation/umi-bundle-defaults"; +import { dasApi } from "@metaplex-foundation/digital-asset-standard-api"; -import { - getAssetWithProof, - transfer, -} from "@metaplex-foundation/mpl-bubblegum"; +const umi = createUmi("").use(dasApi()); +const assetId = publicKey("8TrvJBRa6Pzb9BDadqroHhWTHxaxK8Ws8r91oZ2jxaVV"); + +const asset = await umi.rpc.getAsset(assetId); -const assetWithProof = await getAssetWithProof(umi, assetId); +console.log(asset); ``` -You can use the transfer method from the `@metaplex-foundation/mpl-bubblegum` -package. The method requires two arguments: the `umi` instance and an object -containing the following fields: +Then we can use the transfer method from the +`@metaplex-foundation/mpl-bubblegum` package. 
This method requires two
+arguments: the `umi` instance and an object containing the following fields:
 
 - `assetWithProof` - Data representing the asset and its associated Merkle
   proof.
@@ -689,7 +686,7 @@ Take some time to familiarize yourself with the starter code provided. Most
 important are the helper functions provided in `utils.ts` and the URIs provided
 in `uri.ts`.
 
-The `uri.ts` file provides 10k URIs that you can use for the offchain portion of
+The `uri.ts` file provides 1k URIs that you can use for the offchain portion of
 your NFT metadata. You can, of course, create your own metadata. But this lesson
 isn’t explicitly about preparing metadata so we’ve provided some for you.
 
@@ -710,80 +707,55 @@ unnecessary boilerplate code. They are as follows:
 
 Finally, there’s some boilerplate in `index.ts` that creates a new Devnet
 connection, calls `getOrCreateKeypair` to initialize a “wallet,” and calls
-`airdropSolIfNeeded` to fund the wallet if its balance is low.
-
-We will be writing all of our code in the `index.ts`.
+`airdropSolIfNeeded` to fund the wallet if its balance is low, all wrapped in a
+function named `initializeUmi` that can be used throughout the codebase.
 
 ### 2. Create the Merkle tree account
 
 We’ll start by creating the Merkle tree account. Let’s wrap this in a function
-that will eventually create _and_ initialize the account. We’ll put it below in
-our `index.ts`. Let’s call it `createAndInitializeTree`. For this function to
-work, it will need the following parameters:
+that will eventually create _and_ initialize the account. We’ll create a new
+file called `create-and-initialize-tree.ts` and call our function
+`createAndInitializeTree`. For this function to work, it will need the following
+parameters:
 
-- `connection` - a `Connection` to use for interacting with the network.
+- `umi` - our umi instance
 - `payer` - a `Keypair` that will pay for transactions.
-- `maxDepthSizePair` - a `ValidDepthSizePair`. 
This type comes from `@solana/spl-account-compression`. It’s a simple object with properties `maxDepth` and `maxBufferSize` that enforces a valid combination of the two values. -```typescript -async function createAndInitializeTree( - connection: Connection, - payer: Keypair, - maxDepthSizePair: ValidDepthSizePair, - canopyDepth: number, -) { - const treeKeypair = Keypair.generate(); -} -``` - -### 2. Use Bubblegum to initialize the Merkle tree and create the tree config account - -Now that the instruction for creating the tree is ready, we can create an -instruction for invoking `create_tree` on the Bubblegum program. This will -initialize the Merkle tree account _and_ create a new tree config account on the -Bubblegum program. +This will initialize the Merkle tree account _and_ create a new tree config +account on the Bubblegum program. -This instruction needs us to provide the following: - -- `accounts` - an object of required accounts; this includes: - - `treeAuthority` - this should be a PDA derived with the Merkle tree address - and the Bubblegum program - - `merkleTree` - the address of the Merkle tree - - `payer` - the transaction fee payer - - `treeCreator` - the address of the tree creator; we’ll make this the same as - `payer` - - `maxBufferSize` - the buffer size from our function’s `maxDepthSizePair` - parameter - - `maxDepth` - the max depth from our function’s `maxDepthSizePair` parameter - - `public` - whether or no the tree should be public; we’ll set this to - `false` +```typescript +import * as fs from "fs"; +import dotenv from "dotenv"; +import { createTree } from "@metaplex-foundation/mpl-bubblegum"; +import { generateSigner, publicKey } from "@metaplex-foundation/umi"; +import { getExplorerLink } from "@solana-developers/helpers"; +import { ValidDepthSizePair } from "@solana/spl-account-compression"; +import { initializeUmi } from "."; -Finally, we can add both instructions to a transaction and submit the -transaction. 
Keep in mind that the transaction needs to be signed by both the -`payer` and the `treeKeypair`. +const umi = await initializeUmi(); -```typescript -async function createAndInitializeTree( - payer: Keypair, +export async function createAndInitializeTree( + umi: Umi, maxDepthSizePair: ValidDepthSizePair, ) { - const merkleTree = generateSigner(umi); - - const builder = await createTree(umi, { - merkleTree, - maxDepth: maxDepthSizePair.maxDepth, // Max depth of the tree, - maxBufferSize: maxDepthSizePair.maxBufferSize, // Max buffer size, - public: false, // Set to false to restrict minting to the tree creator/delegate - }); + try { + const merkleTree = generateSigner(umi); + const builder = await createTree(umi, { + merkleTree, + maxDepth: maxDepthSizePair.maxDepth, // Max depth of the tree, + maxBufferSize: maxDepthSizePair.maxBufferSize, // Max buffer size, + public: false, // Set to false to restrict minting to the tree creator/delegate + }); - builder.sendAndConfirm(umi); + builder.sendAndConfirm(umi); - const merkleTreeAddress = merkleTree.publicKey; + const merkleTreeAddress = merkleTree.publicKey; - try { const explorerLink = getExplorerLink( "transaction", merkleTreeAddress, @@ -792,134 +764,158 @@ async function createAndInitializeTree( console.log(`Transaction submitted: ${explorerLink}`); console.log("Tree Address:", merkleTreeAddress); + console.log("Created Merkle Tree Successfully ✅"); + + fs.appendFileSync(".env", `\nMERKLE_TREE_ADDRESS=${merkleTreeAddress}`); return merkleTreeAddress; } catch (error: any) { - console.error("\nFailed to create merkle tree:", error); + console.error("\nFailed to create merkle tree:", error, "❌"); throw error; } } ``` -To test what you have so far, call `createAndInitializeTree` and provide small -values for the max depth and max buffer size. +Then you now call `createAndInitializeTree` and provide small values for the max +depth and max buffer size. 
```typescript
-const connection = new Connection(clusterApiUrl("devnet"), "confirmed");
-const wallet = await getOrCreateKeypair("Wallet1");
-await airdropSolIfNeeded(wallet.publicKey);
-
-const maxDepthSizePair: ValidDepthSizePair = {
-  maxDepth: 3,
-  maxBufferSize: 8,
+export const maxDepthSizePair: ValidDepthSizePair = {
+  maxDepth: 14,
+  maxBufferSize: 64,
 };
 
-const canopyDepth = 0;
+export async function createAndInitializeTree(
+  umi: Umi,
+  maxDepthSizePair: ValidDepthSizePair) {
+  ...
+}
+
+createAndInitializeTree(umi, maxDepthSizePair);
 
-const treeAddress = await createAndInitializeTree(
-  connection,
-  wallet,
-  maxDepthSizePair,
-  canopyDepth,
-);
```
 
-Remember that Devnet SOL is limited so if you test too many times you might run
-out of Devnet SOL before we get to minting. To test, in your terminal run the
-following:
+To test, run the command in your terminal:
+
+`npx esrun create-and-initialize-tree.ts`
+
+Your output should look like this:
+
+```typescript
+
+```
 
-`npm run start`
+In the body of the `createAndInitializeTree` function we have this line of code:
+
+```typescript
+fs.appendFileSync(".env", `\nMERKLE_TREE_ADDRESS=${merkleTreeAddress}`);
+```
+
+This creates a variable called `MERKLE_TREE_ADDRESS` and appends the
+merkleTreeAddress we just initialized to our .env file, so we do not need to
+remember the merkleTreeAddress; in subsequent steps, we just load it from the
+.env file.
 
 #### 4. Mint cNFTs to your tree
 
 Believe it or not, that’s all it takes to set up your tree to mint compressed
 NFTs! Now let’s focus on the minting process.
 
-First, let’s declare a function called `mintCompressedNftToCollection`. It will
-need the following parameters:
+First, let’s create a new file called `mint-compressed-nfts.ts` and declare a
+function called `mintCompressedNftToCollection`. It will need the following
+parameters:
 
-- `connection` - a `Connection` to use for interacting with the network. 
-- `payer` - a `Keypair` that will pay for transactions.
-- `treeAddress` - the Merkle tree’s address
+- `payer` - a `PublicKey` that will pay for transactions, fees, rent, etc.
 - `collectionDetails` - the details of the collection as type
   `CollectionDetails` from `utils.ts`
 - `amount` - the number of cNFTs to mint
 
 The body of this function will do the following:
 
-1. Derive the tree authority just like before. Again, this is a PDA derived from
-   the Merkle tree address and the Bubblegum program.
-2. Derive the `bubblegumSigner`. This is a PDA derived from the string
-   `"collection_cpi"` and the Bubblegum program and is essential for minting to
-   a collection.
-3. Create the cNFT metadata by calling `createNftMetadata` from our `utils.ts`
+1. Create the cNFT metadata by calling `createNftMetadata` from our `utils.ts`
    file.
-4. Create the mint instruction by calling `createMintToCollectionV1Instruction`
-   from the Bubblegum SDK.
-5. Build and send a transaction with the mint instruction
-6. Repeat steps 3-6 `amount` number of times
-
-The `createMintToCollectionV1Instruction` takes two arguments: `accounts` and
-`args`. The latter is simply the NFT metadata. As with all complex instructions,
-the primary hurdle is knowing which accounts to provide. So let’s go through
-them real quick:
+2. Create the mint instruction by calling `mintToCollectionV1` from the
+   Bubblegum SDK.
+3. Build and send a transaction with the mint instruction
+4. Repeat steps 1-3 `amount` number of times
 
-- `payer` - the account that will pay for the transaction fees, rent, etc. 
-- `merkleTree` - the Merkle tree account
-- `treeAuthority` - the tree authority; sh When you put it all together, this is
-  what it’ll look like:
+This is what it’ll look like:
 
 ```typescript
-function mintCompressedNftToCollection(
-  connection: Connection,
-  payer: Keypair,
-  treeAddress: PublicKey,
+import dotenv from "dotenv";
+import { mintToCollectionV1 } from "@metaplex-foundation/mpl-bubblegum";
+import { CollectionDetails } from "@metaplex-foundation/mpl-token-metadata";
+import {
+  base58,
+  Keypair,
+  publicKey,
+  PublicKey,
+} from "@metaplex-foundation/umi";
+import { getExplorerLink } from "@solana-developers/helpers";
+import { createNftMetadata, getOrCreateCollectionNFT } from "./utils";
+import { initializeUmi } from ".";
+import { maxDepthSizePair } from "./create-and-initialize-tree";
+
+const umi = await initializeUmi();
+
+export async function mintCompressedNftToCollection(
+  payer: PublicKey,
   collectionDetails: CollectionDetails,
   amount: number,
 ) {
-  // Derive the Bubblegum signer, used by the Bubblegum program to handle "collection verification"
-  // Only used for `createMintToCollectionV1` instruction
-  const [bubblegumSigner] = PublicKey.findProgramAddressSync(
-    [Buffer.from("collection_cpi", "utf8")],
-    BUBBLEGUM_PROGRAM_ID,
-  );
+  if (!process.env.MERKLE_TREE_ADDRESS) {
+    throw new Error("No MERKLE_TREE_ADDRESS found");
+  }
+  const treeAddress = process.env["MERKLE_TREE_ADDRESS"];
+  const mintAddress = collectionDetails.mint;
+  for (let i = 0; i < amount; i++) {
+    const compressedNFTMetadata = createNftMetadata(payer, i, mintAddress);
+
+    const { signature } = await mintToCollectionV1(umi, {
+      leafOwner: payer,
+      merkleTree: publicKey(treeAddress),
+      collectionMint: mintAddress,
+      metadata: compressedNFTMetadata,
+    }).sendAndConfirm(umi, { confirm: { commitment: "confirmed" } });
+
+    const transactionSignature = base58.deserialize(signature);
+    try {
+      const explorerLink = getExplorerLink(
+        "transaction",
transactionSignature[0],
+        "devnet",
+      );
+      console.log(`Transaction submitted: ${explorerLink} '\n'`);
+      console.log("Address:", transactionSignature[0]);
+    } catch (err) {
+      console.error("\nFailed to mint compressed NFT: '\n'", err);
+      throw err;
+    }
+  }
 }
 ```
 
-This is a great point to test with a small tree. Simply update `index.ts` to
-call `getOrCreateCollectionNFT` then `mintCompressedNftToCollection`:
+Now let us mint cNFTs to our Merkle tree. Simply update
+`mint-compressed-nfts.ts` to call `getOrCreateCollectionNFT` then
+`mintCompressedNftToCollection`:
 
 ```typescript
-const connection = new Connection(clusterApiUrl("devnet"), "confirmed");
-const wallet = await getOrCreateKeypair("Wallet1");
-await airdropSolIfNeeded(wallet.publicKey);
-
-const maxDepthSizePair: ValidDepthSizePair = {
-  maxDepth: 3,
-  maxBufferSize: 8,
-};
-
-const canopyDepth = 0;
-
-const treeAddress = await createAndInitializeTree(
-  connection,
-  wallet,
-  maxDepthSizePair,
-  canopyDepth,
-);
-
-const collectionNft = await getOrCreateCollectionNFT(connection, wallet);
+const collectionNft = await getOrCreateCollectionNFT(umi);
 
 await mintCompressedNftToCollection(
-  connection,
-  wallet,
-  treeAddress,
+  umi.identity.publicKey,
   collectionNft,
   2 ** maxDepthSizePair.maxDepth,
 );
 ```
 
-Again, to run, in your terminal type: `npm run start`
+To run, in your terminal type: `npx esrun mint-compressed-nfts.ts`
+
+Your output should look like this:
+
+```typescript
+
+```
 
 #### 5. Read existing cNFT data
 
@@ -928,8 +924,8 @@ their data. This is tricky because the onchain data is just the Merkle tree
 account, the data from which can be used to verify existing information as
 accurate but is useless in conveying what the information is.
 
-Let’s start by declaring a function `logNftDetails` that takes two parameters
-`treeAddress` and `nftsMinted`.
+Let’s start by creating a new file called `log-nft-details.ts` and declaring a
+function `logNftDetails` that takes a parameter `nftsMinted`.
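Before wiring that up, it helps to pin down where `nftsMinted` comes from: the lab mints `2 ** maxDepthSizePair.maxDepth` cNFTs, one per leaf of the tree. A quick illustrative sketch of that arithmetic (the `treeCapacity` and `leafIndices` helpers below are hypothetical and are not part of the lab's `utils.ts`):

```typescript
// Sketch: a Merkle tree of depth d holds at most 2^d leaves, so leaf
// indices for minted cNFTs run from 0 to 2^d - 1. Illustrative only;
// these helpers do not exist in the lab's codebase.
function treeCapacity(maxDepth: number): number {
  return 2 ** maxDepth;
}

function leafIndices(maxDepth: number): number[] {
  // One index per mintable leaf: [0, 1, ..., 2^d - 1]
  return Array.from({ length: treeCapacity(maxDepth) }, (_, i) => i);
}

console.log(treeCapacity(3)); // 8 — the small test tree holds 8 cNFTs
console.log(leafIndices(3).join(", ")); // 0, 1, 2, 3, 4, 5, 6, 7
```

This is why `logNftDetails` can simply loop `i` from `0` to `nftsMinted - 1` and derive an asset ID for each leaf index.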
Since there's no direct identifier of any kind that points to our cNFT, we need
 to know the leaf index that was used when we minted our cNFT. We can then use
 that to derive the asset ID used by the Read API and then use the Read API to
 fetch our cNFT data.
@@ -953,28 +949,22 @@ a free API Key from [the Helius website](https://dev.helius.xyz/). Then add your
 RPC_URL=https://devnet.helius-rpc.com/?api-key=YOUR_API_KEY
 ```
 
-Then simply issue a POST request to your provided RPC URL and put the `getAsset`
-information in the body:
+Then simply call the `getAsset` method and pass the `assetId` as a parameter
 
 ```typescript
 function logNftDetails(treeAddress: PublicKey, nftsMinted: number) {
+  if (!process.env.RPC_URL) {
+    throw new Error("RPC_URL environment variable is not defined.");
+  }
+  if (!process.env.MERKLE_TREE_ADDRESS) {
+    throw new Error("No MERKLE_TREE_ADDRESS found");
+  }
   for (let i = 0; i < nftsMinted; i++) {
     const assetId = await getLeafAssetId(treeAddress, new BN(i));
-    console.log("Asset ID:", assetId.toBase58());
-    const response = await fetch(process.env.RPC_URL, {
-      method: "POST",
-      headers: { "Content-Type": "application/json" },
-      body: JSON.stringify({
-        jsonrpc: "2.0",
-        id: "my-id",
-        method: "getAsset",
-        params: {
-          id: assetId,
-        },
-      }),
-    });
-    const { result } = await response.json();
-    console.log(JSON.stringify(result, null, 2));
+    console.log("Asset ID:", assetId);
+    const umi = createUmi(process.env.RPC_URL).use(dasApi());
+    const asset = await umi.rpc.getAsset(assetId);
+    console.log(asset);
   }
 }
 ```
 
 Helius monitors transaction logs in real time and stores the NFT metadata that
 was hashed and stored in the Merkle tree. This enables them to display that data
 when requested.
 
-If we add a call to this function at the end of `main` and re-run your script,
-the data we get back in the console is very detailed. It includes all of the
-data you’d expect in both the onchain and offchain portion of a traditional NFT.
-You can find the cNFT’s attributes, files, ownership and creator information,
-and more.
+If you now call this function and re-run your script, the data you get back in
+the console is very detailed. It includes all of the data you’d expect in both
+the onchain and offchain portion of a traditional NFT. You can find the cNFT’s
+attributes, files, ownership and creator information, and more.
 
 ```json
 {
@@ -1087,156 +1076,76 @@ to see what’s available.
 
 #### 6. Transfer a cNFT
 
-The last thing we’re going to add to our script is a cNFT transfer. Just as with
-a standard SPL token transfer, security is important. Unlike regular standard
-SPL token transfer, however, to build a secure transfer with state compression
-of any kind, the program performing the transfer needs the entire asset data.
-
-The program, Bubblegum in this case, needs to be provided with the entire data
-that was hashed and stored on the corresponding leaf _and_ needs to be given the
-“proof path” for the leaf in question. That makes cNFT transfers a bit more
-complicated than SPL token transfers.
-
-Remember, the general steps are:
-
-1. Fetch the cNFT's asset data from the indexer
-2. Fetch the cNFT's proof from the indexer
-3. Fetch the Merkle tree account from the Solana blockchain
-4. Prepare the asset proof as a list of `AccountMeta` objects
-5. Build and send the Bubblegum transfer instruction
+The last thing we’re going to add to our script is a cNFT transfer. To do this,
+we are going to use the

Let’s start by declaring a `transferNft` function that takes the following:

-- `connection` - a `Connection` object
- `assetId` - a `PublicKey` object
- `sender` - a `Keypair` object so we can sign the transaction
- `receiver` - a `PublicKey` object representing the new owner

Inside that function, let’s fetch the asset data again then also fetch the asset
-proof. For good measure, let’s wrap everything in a `try catch`.
+proof. For proper error handling, let’s wrap everything in a `try`/`catch`.
```typescript
-function transferNft(
-  connection: Connection,
-  assetId: PublicKey,
-  sender: Keypair,
-  receiver: PublicKey,
-) {
-  try {
-    const assetDataResponse = await fetch(process.env.RPC_URL, {
-      method: "POST",
-      headers: { "Content-Type": "application/json" },
-      body: JSON.stringify({
-        jsonrpc: "2.0",
-        id: "my-id",
-        method: "getAsset",
-        params: {
-          id: assetId,
-        },
-      }),
-    });
-    const assetData = (await assetDataResponse.json()).result;
-
-    const assetProofResponse = await fetch(process.env.RPC_URL, {
-      method: "POST",
-      headers: { "Content-Type": "application/json" },
-      body: JSON.stringify({
-        jsonrpc: "2.0",
-        id: "my-id",
-        method: "getAssetProof",
-        params: {
-          id: assetId,
-        },
-      }),
-    });
-    const assetProof = (await assetProofResponse.json()).result;
-  } catch (error: any) {
-    console.error("\nFailed to transfer nft:", error);
-    throw error;
-  }
-}
-```
+import dotenv from "dotenv";
+import {
+  getAssetWithProof,
+  transfer,
+} from "@metaplex-foundation/mpl-bubblegum";
+import { base58, publicKey, PublicKey } from "@metaplex-foundation/umi";
+import { getExplorerLink } from "@solana-developers/helpers";
+import { Keypair } from "@solana/web3.js";
+import BN from "bn.js";
+import { initializeUmi } from ".";
+import { getLeafAssetId } from "./get-leaf-assetId";

-Next, let’s fetch the Merkle tree account from the chain, get the canopy depth,
-and assemble the proof path. We do this by mapping the asset proof we got from
-Helius to a list of `AccountMeta` objects, then removing any proof nodes at the
-end that are already cached onchain in the canopy.
+dotenv.config();

-```typescript
- function transferNft(
-    connection: Connection,
+const umi = await initializeUmi();
+
+async function transferNft(
   assetId: PublicKey,
   sender: Keypair,
-    receiver: PublicKey
+  receiver: PublicKey,
 ) {
+  if (!process.env.RPC_URL) {
+    throw new Error("RPC_URL environment variable is not defined.");
+  }
   try {
-    ...
-
-      const treePublicKey = new PublicKey(assetData.compression.tree)
+    const assetWithProof = await getAssetWithProof(umi, assetId);

-      const treeAccount = await ConcurrentMerkleTreeAccount.fromAccountAddress(
-        connection,
-        treePublicKey
-      )
+    const { signature } = await transfer(umi, {
+      ...assetWithProof,
+      leafOwner: umi.identity.publicKey,
+      newLeafOwner: receiver,
+    }).sendAndConfirm(umi, { confirm: { commitment: "confirmed" } });
+    // TO DO
+    const transactionSignature = base58.deserialize(signature);

-      const canopyDepth = treeAccount.getCanopyDepth() || 0
+    const explorerLink = getExplorerLink(
+      "transaction",
+      transactionSignature[0],
+      "devnet",
+    );

-      const proofPath: AccountMeta[] = assetProof.proof
-        .map((node: string) => ({
-          pubkey: new PublicKey(node),
-          isSigner: false,
-          isWritable: false,
-        }))
-        .slice(0, assetProof.proof.length - canopyDepth)
+    console.log(`Transaction submitted: ${explorerLink}`);
   } catch (error: any) {
-    console.error("\nFailed to transfer nft:", err)
-    throw error
+    console.error("\nFailed to transfer nft:", error);
+    throw error;
   }
 }
-```

-Finally, we build the instruction using `createTransferInstruction`, add it to a
-transaction, then sign and send the transaction. This is what the entire
-`transferNft` function looks like when finished:
+// Transfer first cNFT to random receiver to illustrate transfers
+const receiver = Keypair.generate();

-```typescript
-function transferNft(
-  connection: Connection,
-  assetId: PublicKey,
-  sender: Keypair,
-  receiver: PublicKey,
-) {
-  try {
-    const assetDataResponse = await fetch(process.env.RPC_URL, {
-      method: "POST",
-      headers: { "Content-Type": "application/json" },
-      body: JSON.stringify({
-        jsonrpc: "2.0",
-        id: "my-id",
-        method: "getAsset",
-        params: {
-          id: assetId,
-        },
-      }),
-    });
-    const assetData = (await assetDataResponse.json()).result;
-
-    const assetProofResponse = await fetch(process.env.RPC_URL, {
-      method: "POST",
-      headers: { "Content-Type": "application/json" },
-      body: JSON.stringify({
-        jsonrpc: "2.0",
-        id: "my-id",
-        method: "getAssetProof",
-        params: {
-          id: assetId,
-        },
-      }),
-    });
-    const assetProof = (await assetProofResponse.json()).result;
-  }
-
-}
+await transferNft(
+  await getLeafAssetId(publicKey(treeAddress), new BN(0)),
+  wallet,
+  publicKey(receiver.publicKey),
+);
 ```
 
Let’s transfer our first compressed NFT at index 0 to someone else. First we’ll
@@ -1246,49 +1155,30 @@ entire collection using our function `logNftDetails`. You’ll note that the NFT
at index zero will now belong to our new wallet in the
`ownership` field.
```typescript
-const connection = new Connection(clusterApiUrl("devnet"), "confirmed");
-const wallet = await getOrCreateKeypair("Wallet1");
-await airdropSolIfNeeded(wallet.publicKey);
-
-const maxDepthSizePair: ValidDepthSizePair = {
-  maxDepth: 3,
-  maxBufferSize: 8,
-};
-
-const canopyDepth = 0;
-
-const treeAddress = await createAndInitializeTree(
-  connection,
-  wallet,
-  maxDepthSizePair,
-  canopyDepth,
-);
-
 const collectionNft = await getOrCreateCollectionNFT(connection, wallet);
 
 await mintCompressedNftToCollection(
-  connection,
   wallet,
   treeAddress,
   collectionNft,
   2 ** maxDepthSizePair.maxDepth,
 );
 
-const recieverWallet = await getOrCreateKeypair("Wallet2");
+const receiverWallet = await getOrCreateKeypair("Wallet2");
 
 const assetId = await getLeafAssetId(treeAddress, new BN(0));
 
-await airdropSolIfNeeded(recieverWallet.publicKey);
+await airdropSolIfNeeded(receiverWallet.publicKey);
 
 console.log(
-  `Transfering ${assetId.toString()} from ${wallet.publicKey.toString()} to ${recieverWallet.publicKey.toString()}`,
+  `Transferring ${assetId.toString()} from ${wallet.publicKey.toString()} to ${receiverWallet.publicKey.toString()}`,
 );
 
-await transferNft(connection, assetId, wallet, recieverWallet.publicKey);
+await transferNft(assetId, wallet, receiverWallet.publicKey);
 
 await logNftDetails(treeAddress, 8);
 ```
 
 Go ahead and run your script. The whole thing should execute without failing,
 and all for close to 0.01 SOL!
 
 Congratulations! Now you know how to mint, read, and transfer cNFTs.
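Before changing any tree parameters, it may help to see how they trade off. The
following is an illustrative sketch only (not one of the lesson's project
files), assuming the standard Concurrent Merkle Tree rules: a tree holds at
most `2 ** maxDepth` leaves, and a client must send `maxDepth - canopyDepth`
proof accounts with each write.

```typescript
// Illustrative only: how Concurrent Merkle Tree parameters bound a cNFT tree.
function treeStats(maxDepth: number, canopyDepth: number) {
  return {
    // Maximum number of leaves (cNFTs) the tree can ever hold.
    capacity: 2 ** maxDepth,
    // Proof nodes a client must pass per write; the canopy caches the rest.
    proofAccounts: maxDepth - canopyDepth,
  };
}

// The small test tree used in this walkthrough: maxDepth 3, canopy 0.
console.log(treeStats(3, 0)); // { capacity: 8, proofAccounts: 3 }

// A larger tree: caching 10 levels onchain leaves only 4 proof accounts.
console.log(treeStats(14, 10)); // { capacity: 16384, proofAccounts: 4 }
```

A deeper tree buys capacity at the cost of larger proofs, which is exactly why
the canopy exists: it trades rent for smaller transactions.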
If you wanted, you could update the max depth, max buffer size, and canopy
depth to
From c0b40eadef320de0ae197853e02793502b4a8033 Mon Sep 17 00:00:00 2001
From: adpthegreat
Date: Tue, 1 Oct 2024 21:46:53 +0100
Subject: [PATCH 4/6] chore:ran prettier

---
 content/courses/state-compression/compressed-nfts.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/content/courses/state-compression/compressed-nfts.md b/content/courses/state-compression/compressed-nfts.md
index f293e5b37..bedd2f6bc 100644
--- a/content/courses/state-compression/compressed-nfts.md
+++ b/content/courses/state-compression/compressed-nfts.md
@@ -716,8 +716,7 @@ function named `initializeUmi` that can be used throughout the codebase
 
 We’ll start by creating the Merkle tree account. Let’s wrap this in a function
 that will eventually create _and_ initialize the account. We’ll create a new
 file called `create-and-initialize-tree.ts` and call our function
-=======
 `createAndInitializeTree`. For this function to work, it will need the
 following parameters:
 
@@ -1080,7 +1079,8 @@ to see what's available.
 
 #### 6. Transfer a cNFT
 
 The last thing we’re going to add to our script is a cNFT transfer. To do this,
-we are going to use the
+we are going to use the `transfer` method from the
+`@metaplex-foundation/mpl-bubblegum` library.
Let's start by declaring a `transferNft` function that takes the following: @@ -1088,6 +1088,7 @@ Let's start by declaring a `transferNft` function that takes the following: - `sender` - a `Keypair` object so we can sign the transaction - `receiver` - a `PublicKey` object representing the new owner +```typescript dotenv.config(); const umi = await initializeUmi(); @@ -1108,7 +1109,6 @@ async function transferNft( leafOwner: umi.identity.publicKey, newLeafOwner: receiver, }).sendAndConfirm(umi, { confirm: { commitment: "confirmed" } }); - // TO DO const transactionSignature = base58.deserialize(signature); const explorerLink = getExplorerLink( From 745df55e67f7af9dcb49642ca4a887077db56004 Mon Sep 17 00:00:00 2001 From: adpthegreat Date: Tue, 1 Oct 2024 22:01:58 +0100 Subject: [PATCH 5/6] chore:ran prettier --- .../intro-to-solana/intro-to-custom-onchain-programs.md | 4 ++-- content/courses/solana-pay/solana-pay.md | 2 +- content/guides/getstarted/local-rust-hello-world.md | 4 ++-- docs/programs/examples.md | 5 +++-- 4 files changed, 8 insertions(+), 7 deletions(-) diff --git a/content/courses/intro-to-solana/intro-to-custom-onchain-programs.md b/content/courses/intro-to-solana/intro-to-custom-onchain-programs.md index 493fef4b8..8ad09db6e 100644 --- a/content/courses/intro-to-solana/intro-to-custom-onchain-programs.md +++ b/content/courses/intro-to-solana/intro-to-custom-onchain-programs.md @@ -17,8 +17,8 @@ invoked in the onchain program. ### Instructions -In previous lessons, we used the `SystemProgram.transfer()` function from -`@solana/web3.js`, which creates an instruction for the System program to +In previous lessons, we used the `SystemProgram.transfer()` function from +`@solana/web3.js`, which creates an instruction for the System program to transfer SOL. 
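To make the instruction-building idea above concrete, here is an illustrative
sketch (not from the guide itself) of the raw `data` field that
`SystemProgram.transfer()` assembles under the hood, assuming the System
program's wire format: a little-endian `u32` instruction index (2 for
transfer) followed by a little-endian `u64` lamport amount.

```typescript
// Illustrative only: hand-encoding the data of a System program transfer.
const TRANSFER_INDEX = 2; // System program enum: 0 create, 1 assign, 2 transfer

function encodeTransferData(lamports: bigint): Buffer {
  const data = Buffer.alloc(12); // 4 bytes index + 8 bytes lamports
  data.writeUInt32LE(TRANSFER_INDEX, 0); // which System program instruction
  data.writeBigUInt64LE(lamports, 4); // how many lamports to move
  return data;
}

// 0.001 SOL = 1_000_000 lamports
console.log(encodeTransferData(1_000_000n).toString("hex"));
// "0200000040420f0000000000"
```

The accounts to debit and credit are not in this buffer; they travel in the
instruction's account list, which is why `SystemProgram.transfer()` also takes
`fromPubkey` and `toPubkey`.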
When working with other programs, however, you'll need to create instructions diff --git a/content/courses/solana-pay/solana-pay.md b/content/courses/solana-pay/solana-pay.md index 48d7991ab..bfacb6677 100644 --- a/content/courses/solana-pay/solana-pay.md +++ b/content/courses/solana-pay/solana-pay.md @@ -239,7 +239,7 @@ async function post(req: PublicKey, res: PublicKey) { fromPubkey: new PublicKey(account), toPubkey: Keypair.generate().publicKey, lamports: 0.001 * LAMPORTS_PER_SOL, - }); + }); transaction.add(instruction); diff --git a/content/guides/getstarted/local-rust-hello-world.md b/content/guides/getstarted/local-rust-hello-world.md index 88b4b8692..5b87bfa4e 100644 --- a/content/guides/getstarted/local-rust-hello-world.md +++ b/content/guides/getstarted/local-rust-hello-world.md @@ -228,8 +228,8 @@ library. ### Install Node.js To use node in WSL2 on Windows, please follow this -[guide to installing node in WSL2](https://learn.microsoft.com/en-us/windows/dev-environment/javascript/nodejs-on-wsl) -to install node. +[guide to installing node in WSL2](https://learn.microsoft.com/en-us/windows/dev-environment/javascript/nodejs-on-wsl) to +install node. ```shell sudo apt-get install curl diff --git a/docs/programs/examples.md b/docs/programs/examples.md index bd70d765d..6a013545e 100644 --- a/docs/programs/examples.md +++ b/docs/programs/examples.md @@ -39,8 +39,9 @@ framework), [Anchor](https://www.anchor-lang.com/docs/installation), [Seahorse](https://seahorse-lang.org/) and it also contains a list of examples that we would love to [see as contributions](https://github.com/solana-developers/program-examples?tab=readme-ov-file#examples-wed-love-to-see). 
 
-Within the repo you will find the following subfolder, each with assorted
-example programs within them:
+Within the repo you will find the following subfolders, each with assorted
+example programs within them:
 
 - [Basics](#basics)
 - [Compression](#compression)
From d89893f13c4170e5b535bb549c15df1e523cad0 Mon Sep 17 00:00:00 2001
From: adpthegreat
Date: Tue, 1 Oct 2024 22:34:58 +0100
Subject: [PATCH 6/6] chore:prettier

---
 content/guides/getstarted/local-rust-hello-world.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/content/guides/getstarted/local-rust-hello-world.md b/content/guides/getstarted/local-rust-hello-world.md
index 5b87bfa4e..19482e71e 100644
--- a/content/guides/getstarted/local-rust-hello-world.md
+++ b/content/guides/getstarted/local-rust-hello-world.md
@@ -215,8 +215,8 @@ You have successfully setup, built, and deployed a Solana program using the
 Rust language.
 
-You can use the [Solana Explorer](https://explorer.solana.com/) to look at your newly deployed program. 
-The explorer also works on localnet, you can open the [Solana Explorer on localnet](https://explorer.solana.com/?cluster=custom) and just paste your programId in the search bar. 
+You can use the [Solana Explorer](https://explorer.solana.com/) to look at your newly deployed program.
+The explorer also works on localnet; you can open the [Solana Explorer on localnet](https://explorer.solana.com/?cluster=custom) and just paste your programId in the search bar.
 
 ## Let's call the Hello World program