Store Data
Learn how to store data on the Filecoin network using different mechanisms that suit your project's requirements.
A CAR file is a standardized format for bundling and exchanging content-addressable data. It provides a way to organize and encapsulate data, ensuring it can be easily verified and retrieved.
Before sending data to Filecoin storage providers, it is necessary to package the data into CAR (Content Addressable aRchive) files, regardless of whether you store the data via a smart contract or data onramp tooling.
To make storage deals with an SP via smart contracts or aggregators, we need to prepare the data and provide the following information:
Piece CID & Payload CID
CAR size & piece size
URL to your file
We can use the following tools to prepare your data into CAR files for storage via FVM:
FVM Data Depot - powered by lighthouse.storage
CAR libraries: web3.storage/ipfs-car or ipld/car
IPFS node: store data on the IPFS network and provide the CID to Filecoin SPs to initialize storage deals.
There are multiple ways to prepare your data into CAR files and obtain the information needed to initialize storage deals via FVM; we will explain each option below.
FVM Data Depot - recommended
We can upload files, generate CAR files, and get CAR links all on the FVM Data Depot website. After logging in and uploading files, we will get the following information for proposing a storage deal via smart contract:
Piece CID & Payload CID
CAR size & piece size
URL to your file
Using the web3.storage/ipfs-car library
Pack files using CLI
Replace the file path and output path with those of the file you want to pack into a CAR.
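As a sketch, packing a file with the ipfs-car CLI looks like the following. The paths are placeholders, and the exact flags may differ between ipfs-car versions, so check `ipfs-car --help` for your installed release.

```shell
# Install the ipfs-car CLI (assumes Node.js and npm are available)
npm install -g ipfs-car

# Pack a file or directory into a CAR file
# (replace ./my-data and my-data.car with your own input and output paths)
ipfs-car pack ./my-data --output my-data.car
```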
The command reports the root CID of the packed data.
Upload to IPFS Desktop
Afterward, you can obtain the CID or URL of the uploaded data to propose storage deals via FVM on the Filecoin network.
With the support of FVM, applications can leverage the decentralized nature of the smart contract to store data on Filecoin in a more decentralized way. By initiating storage deals through smart contracts on the FVM, the Client Contract (CC) FRC is utilized to propose deals to the Filecoin network. Service Providers (SPs) running Boost can actively monitor and process these deal proposals by listening for specific smart contract events.
The Client Contract serves as a crucial component for making on-chain storage deal proposals on the Filecoin network. To initialize a storage deal proposal via the Client Contract, we first need to pack the data into CAR files and obtain the following information before calling the CC smart contract:
piece CID
CAR link
CAR size
piece size
start and end epoch
Client Contract
The Client Contract library implements the basic functions to make storage deal proposals as well as callback functions for successful storage deal creation.
One of the key methods within this library is the makeDealProposal method, which is responsible for initiating a fully on-chain storage deal proposal. To invoke the makeDealProposal method, you will need to interact with the deployed DealClient contract on Calibration. This method accepts the required parameters for the storage deal, such as the data CID or URL, CAR size, piece size, the duration of the deal, and any other relevant details specific to your use case.
Additionally, the Client Contract library provides callback functions that can be used to handle successful storage deal creation events. These callbacks allow you to perform actions or trigger subsequent processes upon the successful establishment of a storage deal.
A JavaScript function to invoke the makeDealProposal method should look like this:
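The following is a minimal sketch using ethers.js (v5). The DealRequest field layout follows the FEVM hardhat-kit's DealClient contract; field order, the RPC endpoint, and the zero collateral/price values are assumptions you should adjust for your own deployment.

```javascript
// Sketch of calling DealClient.makeDealProposal with ethers.js (v5).
// The DealRequest tuple layout below follows the FEVM hardhat-kit and
// may differ in your own deployment -- treat it as an assumption.

const EPOCHS_PER_DAY = 2880; // one Filecoin epoch is 30 seconds

// Pure helper: derive start/end epochs for a deal of `durationDays`,
// leaving `leadDays` for the SP to pick the proposal up.
function dealEpochRange(currentEpoch, durationDays, leadDays = 1) {
  const startEpoch = currentEpoch + leadDays * EPOCHS_PER_DAY;
  const endEpoch = startEpoch + durationDays * EPOCHS_PER_DAY;
  return { startEpoch, endEpoch };
}

async function makeDealProposal(contractAddress, pieceCid, pieceSize, carSize, carLink, currentEpoch) {
  const { ethers } = require("ethers"); // assumes ethers v5 is installed
  const provider = new ethers.providers.JsonRpcProvider("https://api.calibration.node.glif.io/rpc/v1");
  const signer = new ethers.Wallet(process.env.PRIVATE_KEY, provider);

  const abi = ["function makeDealProposal((bytes,uint64,bool,string,int64,int64,uint256,uint256,uint256,uint64,(string,uint64,bool,bool))) returns (bytes32)"];
  const dealClient = new ethers.Contract(contractAddress, abi, signer);

  const { startEpoch, endEpoch } = dealEpochRange(currentEpoch, 180); // ~6-month deal

  const dealRequest = [
    pieceCid,   // piece CID as bytes
    pieceSize,  // padded piece size
    false,      // verified_deal
    "",         // label
    startEpoch,
    endEpoch,
    0,          // storage_price_per_epoch
    0,          // provider_collateral
    0,          // client_collateral
    1,          // extra_params_version
    [carLink, carSize, false, false], // location_ref, car_size, skip_ipni_announce, remove_unsealed_copy
  ];

  const tx = await dealClient.makeDealProposal(dealRequest);
  return tx.wait();
}
```

The epoch helper is the only pure part; everything else needs a funded Calibration wallet in `PRIVATE_KEY` and the DealClient address you deployed.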
Filecoin is primarily designed for storing large datasets over extended periods. For economic reasons, it is generally not worthwhile for Service Providers (SPs) to accept small datasets and allocate them to their 32 or 64 GiB storage sectors. As a result, SPs are unlikely to directly accept storage deals proposed by the client contract for small datasets.
Lighthouse.storage provides users with two options for uploading data and making storage deals: using the Lighthouse SDK to store data, or leveraging smart contracts to initiate on-chain storage deals.
Store data with the Lighthouse SDK
By creating an account with Lighthouse.storage and generating an API key, you can easily upload data to the Filecoin network using the Lighthouse SDK from any JavaScript application. Data stored using the Lighthouse SDK is automatically registered for deal aggregation as well as RaaS (replication, renewal, and repair).
First, install the Lighthouse SDK in your project with the command npm install -g @lighthouse-web3/sdk. Then use the following code to upload data to Lighthouse for deal aggregation.
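A minimal sketch of the upload call, assuming @lighthouse-web3/sdk is installed and your API key is in the LIGHTHOUSE_API_KEY environment variable:

```javascript
// Sketch of uploading a file with the Lighthouse SDK.
// Assumes @lighthouse-web3/sdk is installed and LIGHTHOUSE_API_KEY is set.

async function uploadToLighthouse(filePath) {
  const lighthouse = require("@lighthouse-web3/sdk");
  const apiKey = process.env.LIGHTHOUSE_API_KEY;

  // Upload the file; Lighthouse registers it for deal aggregation.
  const uploadResponse = await lighthouse.upload(filePath, apiKey);
  // uploadResponse.data is expected to contain the file name, CID, and size
  return uploadResponse;
}
```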
The response, uploadResponse, contains the details of the uploaded file, including its CID.
Store data via the Lighthouse smart contract
We can call the smart contract at 0x01ccBC72B2f0Ac91B79Ff7D2280d79e25f745960 and submit a CID for aggregation via the submit(bytes memory _cid) external returns (uint256) method.
A JavaScript function to invoke the submit method should look like this:
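A sketch with ethers.js (v5); the contract address comes from the paragraph above, while the RPC endpoint and the UTF-8 encoding of the CID are assumptions to verify against Lighthouse's own examples.

```javascript
// Sketch of submitting a CID to the Lighthouse aggregator contract
// on Calibration with ethers.js (v5).

const AGGREGATOR_ADDRESS = "0x01ccBC72B2f0Ac91B79Ff7D2280d79e25f745960";
const AGGREGATOR_ABI = ["function submit(bytes _cid) returns (uint256)"];

async function submitCid(cid) {
  const { ethers } = require("ethers"); // assumes ethers v5 is installed
  const provider = new ethers.providers.JsonRpcProvider("https://api.calibration.node.glif.io/rpc/v1");
  const signer = new ethers.Wallet(process.env.PRIVATE_KEY, provider);

  const aggregator = new ethers.Contract(AGGREGATOR_ADDRESS, AGGREGATOR_ABI, signer);

  // The contract takes the CID as raw bytes; here the CID string is UTF-8 encoded.
  const tx = await aggregator.submit(ethers.utils.toUtf8Bytes(cid));
  return tx.wait();
}
```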
RaaS (Replication, Renewal, and Repair as a Service) refers to the service provided for data stored in storage deals on the Filecoin network. When making storage deals with deal aggregators, such as lighthouse.storage, users have the option to register the RaaS job for the stored data. Subsequently, the aggregators monitor the status of the registered storage deals and initiate the necessary actions for replication, renewal, and repair as required.
When storing data using either the Lighthouse SDK or smart contracts, we can register a RaaS job.
Lighthouse SDK: register the replication, renewal, and repair service by setting deal parameters when uploading data.
Lighthouse smart contract: call submitRaaS with RaaS parameters attached to the storage deal aggregation request.
Register a RaaS job when uploading with the Lighthouse SDK
When uploading a file using the SDK, you have the flexibility to customize how it is stored in Lighthouse by adjusting the deal parameters.
num_copies: Decides how many backup copies you want for your file; the maximum is 3. For instance, if set to 3, your file will be stored by 3 different storage providers.
repair_threshold: Determines when a storage sector is considered "broken" if a provider fails to confirm they still have your file. It's measured in "epochs", with 28800 epochs being roughly 10 days.
renew_threshold: Specifies when your storage deal should be renewed. It's also measured in epochs.
network: This should always be set to 'calibration' (for RAAS services to function) unless you want to use the mainnet.
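The parameters above can be collected into a deal-parameters object and passed along with an upload. The object keys follow the list above; the exact lighthouse.upload signature varies across SDK versions, so treat the call itself as illustrative.

```javascript
// Deal parameters for RaaS, as described above. The threshold values
// here are example choices, not defaults.
const dealParams = {
  num_copies: 2,           // up to 3 backup copies stored by different SPs
  repair_threshold: 28800, // ~10 days of missed proofs before repair
  renew_threshold: 240,    // epochs before expiry at which to renew
  network: "calibration",  // required for RaaS services (unless on mainnet)
};

// Illustrative upload call passing the deal parameters; check your SDK
// version's documentation for the exact argument order.
async function uploadWithRaaS(filePath) {
  const lighthouse = require("@lighthouse-web3/sdk"); // assumed installed
  const apiKey = process.env.LIGHTHOUSE_API_KEY;
  return lighthouse.upload(filePath, apiKey, false, dealParams);
}
```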
Register a RaaS job when proposing a storage deal using the Lighthouse smart contract
Another way to register RaaS jobs is by interacting with the Lighthouse smart contract and submitting a CID of your choice to the submitRaaS function. This creates a new deal request that will be picked up by the Lighthouse RaaS worker, initiating the necessary replication, renewal, and repair processes.
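A sketch of that call with ethers.js (v5). The submitRaaS parameter list shown here (replication target, repair and renew thresholds) is an assumption extrapolated from the deal parameters above; check the deployed contract's ABI for the exact signature before using it.

```javascript
// Sketch of calling submitRaaS on the Lighthouse contract with ethers.js (v5).
// The signature below is an assumption -- verify it against the deployed ABI.

const RAAS_ABI = ["function submitRaaS(bytes _cid, uint256 _replication_target, uint256 _repair_threshold, uint256 _renew_threshold) returns (uint256)"];

async function submitRaaSJob(contractAddress, cid) {
  const { ethers } = require("ethers"); // assumes ethers v5 is installed
  const provider = new ethers.providers.JsonRpcProvider("https://api.calibration.node.glif.io/rpc/v1");
  const signer = new ethers.Wallet(process.env.PRIVATE_KEY, provider);

  const contract = new ethers.Contract(contractAddress, RAAS_ABI, signer);
  const tx = await contract.submitRaaS(
    ethers.utils.toUtf8Bytes(cid),
    2,      // replication target
    28800,  // repair threshold in epochs (~10 days)
    240     // renew threshold in epochs
  );
  return tx.wait();
}
```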
1. The contract owner deploys the contract, establishing the rules of the DataDAO.
2. Data pinners add the deal CIDs they want incentivized to the list, allowing storage providers to see which deals carry additional incentives.
3. The contract is then funded by those who want to see the CID accepted.
4. Finally, the bounty is claimed by the storage providers that accepted the deal, using the MarketAPI to check the status of a deal.
ipfs-car is a thin wrapper that provides a library and CLI tool to pack and unpack CAR (Content Addressable aRchive) files. After installing ipfs-car via NPM, we can use it as a CLI or JS library to pack your data into a CAR file; refer to the ipfs-car documentation to learn more about how to use it.
We can then upload the generated .car file to IPFS using an IPFS node or IPFS Desktop, and provide the CID & URL for proposing storage deals via FVM on the Filecoin network.
Another option is to upload data to the IPFS network using an IPFS node, such as IPFS Desktop or Kubo; the IPFS Desktop guide shows how to add files.
The full tutorial on proposing storage deals through the client contract can be found here.
For small datasets, a more viable option is to store them with storage onramps. Storage onramps combine multiple small datasets into a larger dataset and generate a Proof of Deal Sub-piece Inclusion (PoDSI), which can be used to verify that the sub-piece datasets are included in a storage deal on the Filecoin network.
One of the storage onramps we can use is lighthouse.storage, a perpetual file storage protocol that provides both on-chain and off-chain deal aggregation services. It offers a solution for storing small datasets on Filecoin while also enabling verification of deal inclusion using PoDSI. This combination of services can be valuable for ensuring the integrity and accessibility of small datasets stored on the Filecoin network.
Lighthouse SDK: a JavaScript library that allows you to upload files to the Filecoin network.
Lighthouse smart contract: a Solidity contract to submit and process storage deal aggregation requests.
Lighthouse has also implemented an aggregator smart contract, deployed on the Filecoin Calibration testnet, that allows users to submit deal aggregation requests on-chain.
The full tutorial for uploading data using the Lighthouse SDK and smart contract can be found here.
The tutorial also demonstrates a way to monitor the status of a Filecoin storage deal.
There are two sides to incentivizing data onboarding. The first is to incentivize the client to upload data, which can be done with an ERC20 token included in a DataDAO that pays out to wallets that upload data through the DataDAO. The second is to incentivize storage providers to take a deal. Both are demonstrated in the Deal Bounty Contract.
Note that the full Solidity file for the Deal Bounty Contract can be found here. This cookbook pulls out the relevant functions as a base for your own code.