A storage model defines how data is stored within a system. This page covers the basic aspects of Filecoin’s storage model.
The Filecoin storage model consists of three main components:
Providers
Deals
Sectors
Providers offer storage and retrieval services to network users. There are two types of providers:
Storage Providers
Retrieval Providers
Storage providers, often called SPs, are responsible for storing files and data for clients on the network. They also provide cryptographic proofs to verify that data is stored securely. The majority of providers on the Filecoin network are SPs.
Retrieval providers, or RPs, specialize in delivering quick access to data rather than long-term storage. While many storage providers also offer retrieval services, stand-alone RPs are increasingly joining the network to enhance data accessibility.
In the Filecoin network, SPs and RPs offer storage or retrieval services to clients through deals. These deals are negotiated between two parties and outline terms such as data size, price, duration, and collateral.
The deal-making process initially occurs off-chain. Once both parties agree to the terms, the deal is published on-chain for network-wide visibility and validation.
Sectors are the fundamental units of provable storage where storage providers securely store client data and generate PoSt (Proof of Spacetime) for the Filecoin network. Sectors come in standard sizes, typically 32 GiB or 64 GiB, and have a set lifespan that providers can extend before it expires.
The storage market is the entry point where storage providers and clients negotiate and publish storage deals on-chain.
The lifecycle of a deal within the storage market includes four distinct phases:
Discovery: The client identifies potential storage providers (SPs) and requests their prices.
Negotiation: After selecting an SP, both parties agree to the terms of the deal.
Publishing: The deal is published on-chain.
Handoff: The deal is added to a sector, where the SP can provide cryptographic proofs of data storage.
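The four phases above can be sketched as a tiny state machine. This is an illustrative model only; the phase names come from the list above, while the `Deal` class and its methods are hypothetical.

```python
# Illustrative sketch of the storage-deal lifecycle described above.
# The phase order matches the documentation; the class is hypothetical.

PHASES = ["discovery", "negotiation", "publishing", "handoff"]

class Deal:
    def __init__(self):
        self.phase = PHASES[0]  # every deal starts in discovery

    def advance(self):
        """Move the deal to the next lifecycle phase, in order."""
        i = PHASES.index(self.phase)
        if i + 1 < len(PHASES):
            self.phase = PHASES[i + 1]
        return self.phase

deal = Deal()
deal.advance()  # negotiation
deal.advance()  # publishing
deal.advance()  # handoff: the deal now lives in a sector
```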
Filecoin Plus aims to maximize useful storage on the Filecoin network by incentivizing the storage of meaningful and valuable data. It offers verified clients low-cost or free storage through a system called datacap, a storage quota that boosts incentives for storage providers.
Verified clients use datacap allocated by community-selected allocators to store data on the network. In exchange for storing verified deals, storage providers receive a 10x boost in storage power, which increases their block rewards as an incentive.
Datacap: A token allocated to verified clients to spend on storage deals, offering a 10x quality multiplier for deals.
Allocators: Community-selected entities responsible for verifying storage clients and allocating datacap tokens.
Verified Clients: Active participants with datacap allocations for their data storage needs.
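As a rough sketch of how the 10x multiplier affects a provider's power, the function below computes a simplified quality-adjusted power for a sector. Real quality-adjusted power also accounts for deal duration and the fraction of the sector each deal occupies, which this sketch ignores.

```python
# Simplified quality-adjusted power, assuming a sector filled by one deal.
VERIFIED_DEAL_MULTIPLIER = 10  # datacap-backed (Filecoin Plus) deals
REGULAR_DEAL_MULTIPLIER = 1

def quality_adjusted_power(raw_bytes: int, verified: bool) -> int:
    """Storage power credited for a sector, given its raw size in bytes."""
    multiplier = VERIFIED_DEAL_MULTIPLIER if verified else REGULAR_DEAL_MULTIPLIER
    return raw_bytes * multiplier

sector = 32 * 2**30  # a 32 GiB sector
quality_adjusted_power(sector, verified=True)   # ten times the raw power
quality_adjusted_power(sector, verified=False)  # equal to the raw power
```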
To simplify data storage on the Filecoin network, several tools offer streamlined integration of Filecoin and IPFS storage for applications or smart contracts.
These storage helpers provide libraries that abstract the Filecoin deal-making process into simple API calls. They also store data on IPFS for efficient and fast content retrieval.
Available storage helpers include:
lighthouse.storage: An SDK for builders, providing tools for storing data from dApps.
web3.storage: A user-friendly client for accessing decentralized protocols like IPFS and UCAN.
Akave: A modular L2 solution for decentralized data management, combining Filecoin storage with encryption and easy-to-use interfaces.
Storacha: A decentralized hot storage network for scalable, user-owned data with decentralized permissions, leveraging Filecoin.
Curio: A next-gen platform within the Filecoin ecosystem, streamlining storage provider operations.
boost.filecoin.io: A tool for storage providers to manage data onboarding and retrieval on the Filecoin network.
Crypto-economics is the study of how cryptocurrency can incentivize usage of a blockchain network. This page covers how Filecoin manages incentivization within the network.
Filecoin’s native currency, FIL, is a utility token that incentivizes persistent storage on the Filecoin network. Storage providers earn FIL by offering reliable storage services or committing storage capacity to the network. FIL has a maximum supply of 2 billion tokens; no more than 2 billion FIL will ever exist.
As a utility token aligned with the network’s long-term growth, Filecoin issuance depends on the network’s provable utility and growth. Most of the Filecoin supply is only minted as the network achieves specific growth and utility milestones.
Filecoin uses a dual minting model for block reward distribution:
Up to 770 million FIL tokens are minted based on network performance. Full release of these tokens would only occur if the Filecoin network reaches a yottabyte of storage capacity within 20 years, approximately 1,000 times the capacity of today’s cloud storage.
An additional 330 million FIL tokens are released on a 6-year half-life schedule, with 97% of these tokens projected to be released over about 30 years.
Additionally, 300 million FIL tokens are held in a mining reserve to incentivize future mining models.
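The 6-year half-life figure can be sanity-checked with a short calculation: under exponential decay, the fraction of the 330 million FIL released after t years is 1 − 0.5^(t/6), which is about 97% at the 30-year mark, matching the projection above. A minimal sketch:

```python
# Simple minting on a 6-year half-life schedule, as described above.
SIMPLE_MINTING_SUPPLY = 330_000_000  # FIL on the half-life schedule
HALF_LIFE_YEARS = 6

def simple_minted(years: float) -> float:
    """FIL released after `years` under exponential decay."""
    return SIMPLE_MINTING_SUPPLY * (1 - 0.5 ** (years / HALF_LIFE_YEARS))

simple_minted(6)   # half the pool: 165,000,000 FIL
simple_minted(30)  # 96.875% of the pool, i.e. roughly 97%
```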
Mining rewards are subject to a vesting schedule to support long-term network alignment. For instance, 75% of block rewards earned by miners vest linearly over 180 days, while 25% are immediately accessible, improving miner cash flow and profitability. Further, FIL tokens are vested to Protocol Labs teams and the Filecoin Foundation over six years and to SAFT investors over three years, as outlined in the vesting schedule.
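The block-reward vesting rule above (25% immediately accessible, 75% vesting linearly over 180 days) amounts to the following calculation; the function name is illustrative.

```python
def available_reward(reward: float, days_since_earned: int) -> float:
    """FIL from a block reward accessible after a given number of days:
    25% immediately, plus 75% vesting linearly over 180 days."""
    immediate = 0.25 * reward
    vested = 0.75 * reward * min(days_since_earned, 180) / 180
    return immediate + vested

available_reward(100, 0)    # 25.0  -> only the immediate quarter
available_reward(100, 90)   # 62.5  -> halfway through vesting
available_reward(100, 180)  # 100.0 -> fully vested
```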
To ensure network security and reliable storage, storage providers must lock FIL as pledge collateral during block reward mining. Pledge collateral is based on projected block rewards a miner could earn. Collateral and all earned rewards are subject to slashing if the storage fails to meet reliability standards throughout a sector’s lifecycle.
FIL’s maximum circulating supply is capped at 2 billion FIL. However, this maximum will never be reached, as a portion of FIL is permanently removed from circulation through gas fees, penalties, and other mechanisms.
This section offers a detailed overview of Filecoin for developers, serving as a go-to reference for their needs.
Filecoin is a peer-to-peer network that enables reliable, decentralized file storage through built-in economic incentives and cryptographic proofs. Users pay storage providers—computers that store and continuously prove file integrity—to securely store their files over time. Anyone can join Filecoin as a user seeking storage or as a provider offering storage services. Storage availability and pricing aren’t controlled by any single entity; instead, Filecoin fosters an open market for file storage and retrieval accessible to all.
Filecoin is built on the same technology as the IPFS protocol. IPFS is a distributed storage network that uses content addressing to provide permanent data references without dependency on specific devices or cloud providers. Filecoin differs from IPFS by introducing an incentive layer that promotes reliable storage and consistent access to data.
Filecoin’s use cases are diverse, ranging from Web3-native NFT storage to metaverse and gaming assets, as well as incentivized, permanent storage. It also offers a cost-effective solution for archiving traditional Web2 datasets, making it a strong alternative to conventional cloud storage.
For instance, NFT.Storage leverages Filecoin for decentralized NFT content and metadata storage. Likewise, organizations like the Shoah Foundation and the Internet Archive use Filecoin for content preservation and backup.
Filecoin is compatible with various data types, including audio and video files. This versatility allows Web3 platforms like Audius and Huddle01 to use Filecoin as a decentralized storage backend for music streaming and video conferencing.
Filecoin is a decentralized, peer-to-peer network enabling anyone to store and retrieve data over the internet. Economic incentives are built in, ensuring files are stored and accessible reliably over time.
Actors are smart contracts that run on the Filecoin virtual machine (FVM) and are used to manage, query, and update the state of the Filecoin network. Smart contracts are small, self-executing blocks of code.
Built-in actors are how the Filecoin network manages and updates global state. The global state of the network at a given epoch can be thought of as the set of blocks agreed upon via network consensus in that epoch. This global state is represented as a state tree, which maps an actor to an actor state. An actor state describes the current conditions for an individual actor, such as its FIL balance and its nonce.

In Filecoin, actors trigger a state transition by sending a message. Each block in the chain can be thought of as a proposed global state, where the block selected by network consensus sets the new global state. Each block contains a series of messages and a checkpoint of the current global state after the application of those messages. The Filecoin Virtual Machine (FVM) is the Filecoin network component in charge of executing all actor code.
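As a mental model, the state tree can be pictured as a map from actor addresses to actor states, with each message producing a new proposed state. The sketch below is purely illustrative; the addresses and the flat dictionary representation are hypothetical, not Filecoin's actual IPLD encoding.

```python
# Illustrative: the global state as a map from actor ID to actor state.
state_tree = {
    "f01000": {"balance": 5_000, "nonce": 12},
    "f01001": {"balance": 250, "nonce": 3},
}

def apply_transfer(state, sender, recipient, amount):
    """Apply a simple value-transfer message, producing a new state."""
    new_state = {k: dict(v) for k, v in state.items()}  # copy each actor state
    new_state[sender]["balance"] -= amount
    new_state[sender]["nonce"] += 1  # sender's message count increases
    new_state[recipient]["balance"] += amount
    return new_state

next_state = apply_transfer(state_tree, "f01000", "f01001", 100)
next_state["f01001"]["balance"]  # 350: the transfer was applied
```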
A basic example of how actors are used in Filecoin is the process by which storage providers prove storage and are subsequently rewarded: the storage provider submits a proof of storage, and it is awarded storage power based on whether the proof is valid or not.
Each block in the Filecoin chain contains the following:
Inline data such as current block height.
A pointer to the current state tree.
A pointer to the set of messages that, when applied to the network, generated the current state tree.
A Merkle DAG maps the state tree; its nodes contain information on:
Actors, like FIL balance, nonce, and a pointer (CID) to actor state data.
Messages in the current block
Like the state tree, a Merkle Directed Acyclic Graph (Merkle DAG) is used to map the set of messages for a given block. Nodes in the messages may contain information on:
The actor the message was sent to
The actor that sent the message
Target method to call on the actor being sent the message
A cryptographic signature for verification
The amount of FIL transferred between actors
The code that defines an actor in the Filecoin network is separated into different methods. Messages sent to an actor contain information on which method(s) to call and the input parameters for those methods. Additionally, actor code interacts with a runtime object, which contains information on the general state of the network, such as the current epoch, cryptographic signatures, and proof validations.

Like smart contracts in other blockchains, actors must pay a gas fee: a predetermined amount of FIL that offsets the cost (network resources used, etc.) of a transaction. Every actor has a Filecoin balance attributed to it, a state pointer, a code identifier that tells the system what type of actor it is, and a nonce, which tracks the number of messages sent by this actor.
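The per-actor fields described above can be summarized as a record; the field names below are illustrative, not the on-chain encoding.

```python
from dataclasses import dataclass

@dataclass
class ActorState:
    """Illustrative sketch of the fields every Filecoin actor carries."""
    balance: int    # FIL attributed to the actor
    state_cid: str  # pointer (CID) to the actor's state data
    code_cid: str   # identifies what type of actor this is
    nonce: int      # number of messages sent by this actor

a = ActorState(balance=0, state_cid="bafy-example", code_cid="account", nonce=0)
a.nonce += 1  # incremented each time the actor sends a message
```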
The 11 different types of built-in actors are as follows:
The CronActor sends messages to the StoragePowerActor and StorageMarketActor at the end of each epoch. The messages sent by the CronActor indicate to the StoragePowerActor and StorageMarketActor how they should maintain their internal state and process deferred events. This system actor is instantiated in the genesis block and interacts directly with the FVM.

The InitActor can initialize new actors on the Filecoin network. This system actor is instantiated in the genesis block and maintains a table resolving public key and temporary actor addresses to their canonical ID addresses. The InitActor interacts directly with the FVM.

The AccountActor is responsible for user accounts. Account actors are not created by the InitActor but by sending a message to a public-key style address. The account actor updates the state tree with the new actor address and interacts directly with the FVM.

The RewardActor manages unminted Filecoin tokens and distributes rewards directly to miner actors, where they are locked for vesting. The reward value used for the current epoch is updated at the end of each epoch. The RewardActor interacts directly with the FVM.

The StorageMarketActor is responsible for processing and managing on-chain deals. It is also the entry point of all storage deals and data into the system. This actor keeps track of storage deals and the locked balances of both the client storing data and the storage provider. When a deal is posted on-chain through the StorageMarketActor, the actor first checks whether both transacting parties have sufficient balances locked up and then includes the deal on-chain. Additionally, the StorageMarketActor holds the Storage Deal Collateral provided by the storage provider to collateralize deals. This collateral is returned to the storage provider when all deals in the sector successfully conclude. This actor does not interact directly with the FVM.
The StorageMinerActor is created by the StoragePowerActor and is responsible for storage mining operations and the collection of mining proofs. This actor is a key part of the Filecoin storage mining subsystem, which ensures a storage miner can effectively commit storage to Filecoin and handles the following:
Committing new storage
Continuously proving storage
Declaring storage faults
Recovering from storage faults
This actor does not interact directly with the FVM.
The MultisigActor handles operations involving the Filecoin wallet and represents a group of transaction signers, with a maximum of 256. Signers may be external users or the MultisigActor itself. This actor does not interact directly with the FVM.

The PaymentChannelActor creates and manages payment channels, a mechanism for off-chain microtransactions that Filecoin dApps can later reconcile on-chain with less overhead than standard on-chain transactions and no gas costs. Payment channels are unidirectional and can be funded by adding to their balance. To create a payment channel and deposit funds, a user calls the PaymentChannelActor. This actor does not interact directly with the FVM.

The StoragePowerActor is responsible for keeping track of the storage power allocated to each storage miner and can create a StorageMinerActor. This actor does not interact directly with the FVM.

The VerifiedRegistryActor is responsible for managing Filecoin Plus clients. This actor can add a verified client to the Filecoin Plus program, remove and reclaim expired DataCap allocations, and manage claims. This actor does not interact directly with the FVM.
A user actor is code defined by any developer that can interact with the FVM, otherwise known as a smart contract.
With the FVM, actors can be written in Solidity. In future updates, any language that compiles to Wasm will be supported. With user actors, users can create and enforce custom rules for storing and accessing data on the network. The FVM is responsible for managing actors and ensuring that they execute correctly and securely.
The retrieval market facilitates the negotiation of retrieval deals for serving stored data to clients in exchange for FIL.
Currently, Filecoin nodes support direct retrieval from the storage miners who originally stored the data. Clients can send retrieval requests directly to a storage provider and pay a small amount of FIL to retrieve their data.
To request data retrieval, clients need to provide the following information to the storage provider:
Storage Provider ID: The ID of the storage provider where the data is stored.
Payload CID: Also known as Data CID.
Address: The address initially used to create the storage deal.
A blockchain is a distributed database shared among nodes in a computer network. This page covers the design and functions of the Filecoin blockchain.
The Filecoin blockchain consists of a chain of tipsets rather than individual blocks. A tipset is a set of blocks with the same height and parent tipset, allowing multiple storage providers to produce blocks in each epoch to increase network throughput.
Each tipset is assigned a weight, enabling the consensus protocol to guide nodes to build on the heaviest chain. This adds a level of security to the Filecoin network by preventing interference from nodes attempting to produce invalid blocks.
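The fork-choice rule, build on the heaviest chain, can be sketched as follows; representing each chain as a list of (height, weight) tipset tuples is an illustrative simplification.

```python
def select_heaviest(chains):
    """Pick the chain whose head tipset carries the greatest weight.

    Each chain is a list of (height, weight) tipsets; the last entry
    is the chain head. This representation is illustrative only.
    """
    return max(chains, key=lambda chain: chain[-1][1])

honest = [(0, 0), (1, 120), (2, 260)]  # heavier head: more committed power
fork   = [(0, 0), (1, 110)]
select_heaviest([honest, fork])  # returns the honest (heavier) chain
```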
An actor in the Filecoin blockchain is similar to a smart contract in the Ethereum Virtual Machine. It functions as an ‘object’ within the Filecoin network, with a state and a set of methods for interaction.
Several built-in system actors power the Filecoin network as a decentralized storage network:
System actor: General system actor.
Init actor: Initializes new actors and records the network name.
Cron actor: Scheduler that runs critical functions at every epoch.
Account actor: Manages user accounts (non-singleton).
Reward actor: Manages block rewards and token vesting (singleton).
Storage miner actor: Manages storage mining operations and validates storage proofs.
Storage power actor: Tracks storage power allocation for each provider.
Storage market actor: Manages storage deals.
Multisig actor: Handles Filecoin multi-signature wallet operations.
Payment channel actor: Sets up and settles payment channel funds.
Datacap actor: Manages datacap tokens.
Verified registry actor: Manages verified clients.
Ethereum Address Manager (EAM) actor: Assigns Ethereum-compatible addresses on Filecoin, including EVM smart contract addresses.
Ethereum Virtual Machine (EVM) account actor: Represents an external Ethereum identity backed by a secp256k1 key.
With the maturity of the FVM, developers can write actors and deploy them on the Filecoin network, similar to other blockchains' smart contracts. User-programmable actors can interact with built-in actors via the exported API from built-in actors.
Filecoin nodes are categorized by the services they provide to the storage network, including chain verifier nodes, client nodes, storage provider nodes, and retrieval provider nodes. All participating nodes must provide chain verification services.
Filecoin supports multiple protocol implementations to enhance security and resilience. Active implementations include Lotus, Venus, and Forest.
In the Filecoin network, addresses identify actors in the Filecoin state. Each address encodes information about the corresponding actor, making it easy to use and resistant to errors. Filecoin has five address types. Mainnet addresses start with f, and testnet addresses start with t.
f0/t0: ID address for an actor in a human-readable format, such as f0123261 for a storage provider.
f1/t1: secp256k1 wallet address, generated from an encrypted secp256k1 public key.
f2/t2: Address assigned to an actor (smart contract) in a way that ensures stability across network forks.
f3/t3: BLS wallet address, generated from a BLS public key.
f4/t4: Address created and assigned to user-defined actors by customizable "address management" actors. This address can receive funds before an actor is deployed.
f410/t410: Address space managed by the Ethereum Address Manager (EAM) actor, allowing Ethereum-compatible addresses to interact seamlessly with the Filecoin network. Ethereum addresses can be cast as f410/t410 addresses and vice versa, enabling compatibility with existing Ethereum tools.
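Since each address string encodes its network and actor type in its first two characters, a small classifier can be sketched; the descriptive labels below are paraphrased from the list above.

```python
# Map the address-type digit to a short description (paraphrased above).
ADDRESS_TYPES = {
    "0": "ID address",
    "1": "secp256k1 wallet address",
    "2": "actor address",
    "3": "BLS wallet address",
    "4": "delegated address (e.g. f410 for Ethereum-style addresses)",
}

def classify(addr: str) -> str:
    """Classify a Filecoin address by its network prefix and type digit."""
    network = {"f": "mainnet", "t": "testnet"}[addr[0]]
    return f"{network} {ADDRESS_TYPES[addr[1]]}"

classify("f0123261")  # 'mainnet ID address'
classify("t3abc")     # 'testnet BLS wallet address'
```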
Expected Consensus (EC) is the consensus algorithm underlying Filecoin. EC is a probabilistic, Byzantine fault-tolerant protocol that conducts a leader election among storage providers each epoch to determine which provider submits a block. Similar to proof-of-stake, Filecoin’s leader election relies on proof-of-storage, meaning the probability of being elected depends on how much provable storage power a miner contributes to the network. This storage power is recorded in the storage power table, managed by the Storage Power Actor.
The block production process for each epoch is as follows:
Elect leaders from eligible miners.
Miners check if they are elected.
Elected miners generate WinningPoSt using randomness.
Miners build and propagate a block.
Verify the winning miner and election.
Select the heaviest chain to add blocks.
EC enforces soft finality: miners at round N reject blocks that fork off before round N - F, where F is set to 900. This ensures finality without compromising chain availability.
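The soft-finality rule reduces to a simple cutoff check. A sketch, using the lookback parameter F = 900 from above:

```python
FINALITY_LOOKBACK = 900  # F: forks older than this many rounds are rejected

def accepts_fork(current_round: int, fork_round: int) -> bool:
    """A miner at current_round rejects blocks forking off before
    round current_round - F."""
    return fork_round >= current_round - FINALITY_LOOKBACK

accepts_fork(10_000, 9_500)  # True: within the 900-round window
accepts_fork(10_000, 9_000)  # False: the fork point is too old
```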
Filecoin operates on proof-of-storage, where miners offer storage space and provide proofs to verify data storage.
With proof-of-replication (PoRep), storage providers prove they have created a unique copy of the client’s data for the network.
Storage providers must continuously prove that they are storing clients' data throughout the entire duration of the storage deal. The proof-of-spacetime (PoSt) process includes two types of challenges:
Winning PoSt: Verifies that a storage provider holds a copy of the data at a specific point in time.
Window PoSt: Confirms that the data has been consistently stored over a defined period.
If storage providers fail to maintain reliable uptime or act maliciously, they face penalties through a process called slashing. Filecoin enforces two types of slashing:
Storage Fault Slashing: Penalizes providers who fail to maintain healthy and reliable storage sectors.
Consensus Fault Slashing: Penalizes providers attempting to disrupt the security or availability of the consensus process.
This section covers the basic concepts surrounding the Filecoin blockchain.
Once data is stored, computations can be performed directly on it without needing retrieval. This page covers the basics of programming on Filecoin.
For example, Bacalhau provides a platform for public, transparent, and verifiable distributed computation, allowing users to run Docker containers and WebAssembly (Wasm) images as tasks on data stored in InterPlanetary File System (IPFS).
Filecoin is uniquely positioned to support large-scale off-chain computation because storage providers have compute resources, such as GPUs and CPUs, colocated with their data. This setup enables a new paradigm where computations occur directly on the data where it resides, reducing the need to move data to external compute nodes.
The Filecoin Virtual Machine (FVM) is a runtime environment for executing smart contracts on the Filecoin network. These smart contracts allow users to run bounded computations and establish rules for storing and accessing data. The FVM ensures that these contracts are executed securely and reliably.
Initially, the FVM supports smart contracts written in Solidity, with plans to expand to other languages that compile to Wasm, as outlined in the FVM roadmap.
By enabling compute-over-states on the Filecoin network, the FVM unlocks a wide range of potential use cases. Examples include:
FVM enables a new kind of organization centered around data.
The FVM makes it possible to create and manage decentralized and autonomous organizations (Data DAOs) focused on data curation and preservation. Data DAOs allow groups of individuals or organizations to govern and monetize data access, pooling returns into a shared treasury to fund preservation and growth. These data tokens can also be exchanged among peers or used to request computation services, such as validation, analysis, feature detection, and machine learning.
The FVM allows users to store data once and use repair and replication bots to manage ongoing storage deals, ensuring perpetual data storage. Through smart contracts, users can fund a wallet with FIL, allowing storage providers to maintain data storage indefinitely. Repair bots monitor these storage deals and replicate data across providers as needed, offering long-term data permanence.
The FVM can facilitate unique financial services tailored for storage providers (SPs) in the Filecoin ecosystem.
Users can lend Filecoin to storage providers to be used as storage collateral, earning interest in return. Loans may be undercollateralized based on SP performance history, with reputation scores generated from on-chain data. Loans can also be automatically repaid to investors using a multisig wallet, which includes lenders and a third-party arbitrator. New FVM-enabled smart contracts create yield opportunities for FIL holders while supporting the growth of storage services on the network.
SPs may require financial products to protect against risks in providing storage solutions. Attributes such as payment history, operational length, and availability can be used to underwrite insurance policies, shielding SPs from financial impacts due to storage faults or token price fluctuations.
The FVM is expected to achieve feature parity with other persistent EVM chains, supporting critical infrastructure for decentralized exchanges and token bridges.
To facilitate on-chain token exchange, the FVM may support decentralized exchanges like Uniswap or Sushi, or implement decentralized order books similar to Serum on Solana.
Although not an immediate focus, token bridges will eventually connect Filecoin to EVM, Move, and Cosmos chains, enabling cross-chain wrapped tokens. While Filecoin currently offers unique value without needing to bootstrap liquidity from other chains, long-term integration with other blockchains is anticipated.
If you are interested in building these use cases, the following solution blueprints may be helpful:
Since Filecoin nodes support the Ethereum JSON-RPC API, FEVM is compatible with existing EVM development tools, such as Hardhat, Brownie, and MetaMask. Most smart contracts deployed to Filecoin require minimal adjustments, if any. For example, new ERC-20 tokens can be launched on Filecoin or bridged to other chains.
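Because Filecoin nodes expose the Ethereum JSON-RPC API, existing tooling communicates with them using standard JSON-RPC 2.0 request bodies. The sketch below only builds such a body and sends no request; `eth_chainId` is a standard Ethereum JSON-RPC method.

```python
import json

def eth_rpc_request(method: str, params: list, request_id: int = 1) -> str:
    """Build a standard Ethereum JSON-RPC 2.0 request body, as accepted
    by Filecoin nodes exposing the Ethereum JSON-RPC API."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

# e.g. query the chain ID, as MetaMask or Hardhat would:
eth_rpc_request("eth_chainId", [])
```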
Developers can choose between deploying actors on the FEVM or native FVM: for optimal performance, actors should be written in languages that compile to Wasm and deployed to the native FVM. For familiarity with Solidity and EVM tools, the FEVM is a convenient alternative.
In summary, the FEVM provides a straightforward path for Web3 developers to begin building on Filecoin using familiar tools and languages, while gaining native access to Filecoin storage deals.
The primary difference between FEVM and EVM contracts is that FEVM contracts can interact directly with Filecoin-specific actors, such as miner actors, which are inaccessible to Ethereum contracts. To enable seamless integration, a Filecoin-Solidity API library has been developed to facilitate interactions with Filecoin-specific actors and syscalls.
For those familiar with the Ethereum virtual machine (EVM), actors work similarly to smart contracts. In the Filecoin network, there are two types of actors:
Built-in actors: Hardcoded programs written ahead of time by network engineers that manage and orchestrate key subprocesses and subsystems in the Filecoin network.
User actors: Code implemented by any developer that interacts with the Filecoin Virtual Machine (FVM).
The StorageMinerActor processes the proof of storage submitted by a storage provider.
The StoragePowerActor accounts for the storage power.
During block validation, the StoragePowerActor state, which includes information on storage power allocated to each storage provider, is read.
Using the state information, the consensus mechanism randomly awards blocks to the storage providers with the most power, and the RewardActor sends FIL to storage providers.
A Merkle Directed Acyclic Graph (Merkle DAG) is used to map the state tree and the set of messages.
For more information on SystemActor, see the built-in actors documentation.
A smart contract is a small, self-executing block of custom code that runs on other blockchains, like Ethereum. In the Filecoin network, the term is a synonym for user actor. You may see the term smart contract used in tandem with user actor, but there is no difference between the two.
Saturn is a Web3 CDN within Filecoin’s retrieval market that serves data stored on Filecoin with low latency and at a low cost. It consists of independent retrieval providers dedicated to efficient, fast, and reliable data retrieval operations.
Filecoin uses the drand protocol as a randomness beacon for leader election in the expected consensus process. This randomness ensures leader election is secret, fair, and verifiable.
At a high level, the consensus process uses drand to provide distributed, verifiable randomness, ensuring that leader election is secret, fair, and unbiased. Election participants and their storage power are drawn from the Power Table, which is continuously calculated and maintained by the Storage Power Consensus subsystem. Ultimately, EC gathers all valid blocks produced in an epoch and applies a weighting function to select the heaviest chain, adding blocks accordingly.
Beyond storage and retrieval, data often needs transformation. Compute-over-data protocols enable computations over IPLD, the data layer used by content-addressed systems like Filecoin. Working groups are developing compute solutions for Filecoin data, including large-scale parallel compute (e.g., Bacalhau) and cryptographically verifiable compute (e.g., Lurk).
The FVM is designed to support both native Filecoin actors written in languages that compile to Wasm and smart contracts from other runtimes, such as Solidity for the Ethereum Virtual Machine (EVM), Secure EcmaScript (SES), and eBPF. The reference FVM and SDK are written in Rust, ensuring high performance and security.
In addition to these, the FVM could support various other use cases, such as data access control, trustless reputation systems, replication workers, storage bounties, and L2 networks. For more details on potential use cases, see the FVM announcement post.
The Filecoin EVM (FEVM) is an Ethereum Virtual Machine (EVM) runtime built on top of the FVM. It allows developers to port existing EVM-based smart contracts directly onto Filecoin. The FEVM emulates EVM bytecode at a low level, supporting contracts written in Solidity, Vyper, and Yul. The EVM runtime is based on open-source libraries, including Revm. More details can be found in the FVM documentation.
For example FEVM contracts, see the available examples.
💡 Learn the basics
New to Filecoin and looking for foundational concepts? Start with the Basics section to understand the essentials and kick off your journey!
🔧 Build with Filecoin
Ready to develop on the Filecoin network? Head to the Developers section for guides and examples to help bring your project to life.
🏗️ Become a Storage Provider
Thinking about running a provider node on Filecoin? Visit the Provider section for comprehensive guidance on getting started.
📊 Store data
Looking to store large volumes of data? Explore the Store section to review the various storage options Filecoin offers.
The Filecoin network has several networks for testing, staging, and production purposes. This page provides information on available networks.
Mainnet is the live production network that connects all nodes on the Filecoin network. It operates continuously without resets.
Test networks, or testnets, are versions of the Filecoin network that simulate various aspects of the mainnet. They are intended for testing and should not be used for production applications or services.
The Calibration testnet offers the closest simulation of the mainnet. It provides realistic sealing performance and hardware requirements due to the use of finalized proofs and parameters, allowing prospective storage providers to test their setups. Storage clients can also store and retrieve real data on this network, participating in deal-making workflows and testing storage/retrieval functionalities. Calibration testnet uses the same sector size as the mainnet.
The section covers the assets you can find on the Filecoin network, along with how to securely manage and use them.
Like many other blockchains, blocks are a fundamental concept in Filecoin. Unlike other blockchains, Filecoin is a chain of groups of blocks called tipsets rather than a chain of individual blocks.
In Filecoin, a block consists of:
A block header
A list of messages contained in the block
A signed copy of each message listed
Every block refers to at least one parent block; that is, a block produced in a prior epoch.
A message represents communication between two actors and thus a change in network state. Messages are listed in their order of appearance, deduplicated, and returned in canonical order of execution. In other words, a block describes all changes to the network state in a given epoch.
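The structure described above can be sketched as a minimal data model. This is an illustration only; the field names and CID placeholder are not the actual Lotus types:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SignedMessage:
    message: str    # the actor-to-actor message payload (illustrative)
    signature: str  # the signed copy of that message

@dataclass
class BlockHeader:
    miner: str          # the storage provider that produced the block
    height: int         # the epoch this block belongs to
    parents: List[str]  # CIDs of parent blocks from a prior epoch

@dataclass
class Block:
    header: BlockHeader
    messages: List[SignedMessage]  # the messages contained in the block

# Every block (other than genesis) refers to at least one parent block
# produced in a prior epoch.
genesis = Block(BlockHeader("f01000", 0, []), [])
child = Block(BlockHeader("f01001", 1, ["bafy2...genesis-cid"]), [])
```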
Block time is the average time it takes to produce a new block on a blockchain. In Ethereum, for example, the block time is approximately 15 seconds, meaning that a new block is added to the Ethereum blockchain roughly every 15 seconds.
In the Filecoin network, storage providers compete to produce blocks by providing storage capacity and participating in the consensus protocol. The block time determines how frequently new blocks are added to the blockchain, which impacts the overall speed and responsiveness of the network.
Filecoin has a block time of 30 seconds, and this duration was chosen for two main reasons:
Hardware requirements: If the block time were faster while maintaining the same gas limit or the number of messages per block, it would lead to increased hardware requirements. This includes the need for more storage space to accommodate the larger chain data resulting from more frequent block production.
Storage provider operations: The block time also accounts for the various operations that storage providers (SPs) must perform within that duration. The 30-second block time allows SPs generating new blocks to carry out the necessary processes and computations effectively; if the block time were shorter, SPs would encounter significantly more block production failures.
By considering these factors, the Filecoin network has established a block time of 30 seconds, balancing the need for efficient operations and hardware requirements.
As described in Consensus, multiple potential block producers may be elected via Expected Consensus (EC) to create a block in each epoch, which means that more than one valid block may be produced in a given epoch. All valid blocks with the same height and same parent block are assembled into a group called a tipset.
In other blockchains, blocks are used as the fundamental representation of network state, that is, the overall status of each participant in the network at a given time. However, this structure has the following disadvantages:
Potential block producers may be hobbled by network latency.
Not all valid work is rewarded.
Decentralization and collaboration in block production are not incentivized.
Because Filecoin is a chain of tipsets rather than individual blocks, the network enjoys the following benefits:
All valid blocks generated in a given round are used to determine network state, increasing network efficiency and throughput.
All valid work is rewarded (that is, all validated block producers in an epoch receive a block reward).
All potential block producers are incentivized to produce blocks, disincentivizing centralization and promoting collaboration.
Because all blocks in a tipset have the same height and parent, Filecoin is able to achieve rapid convergence in the case of forks.
In summary, blocks, which contain actor messages, are grouped into tipsets in each epoch, which can be thought of as the overall description of the network state for a given epoch.
Wherever you see the term block in the Ethereum JSON-RPC, you should mentally read tipset. Before the inclusion of the Filecoin EVM runtime, there was no single hash referring to a tipset. A tipset ID was the concatenation of block CIDs, which led to a variable-length ID and poor user experience.
With the Ethereum JSON-RPC, we introduced the concept of the tipset CID for the first time. It is calculated by hashing the former tipset key using a Blake-256 hash. Therefore, when you see the term:
block hash, think tipset hash.
block height, think tipset epoch.
block messages, think messages in all blocks in a tipset, in their order of appearance, deduplicated and returned in canonical order of execution.
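The "deduplicated, in order of appearance" rule for tipset messages can be sketched as follows. This is a simplification; the real canonical execution order also accounts for gas and sender nonces:

```python
def tipset_messages(blocks):
    """Collect messages from all blocks in a tipset, in their order of
    appearance, keeping only the first occurrence of each message."""
    seen = set()
    ordered = []
    for block in blocks:      # blocks in their tipset order
        for msg in block:
            if msg not in seen:
                seen.add(msg)
                ordered.append(msg)
    return ordered

# Two blocks in the same tipset may each include the same message once.
b1 = ["msgA", "msgB"]
b2 = ["msgB", "msgC"]
print(tipset_messages([b1, b2]))  # ['msgA', 'msgB', 'msgC']
```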
Drand, pronounced dee-rand, is a distributed randomness beacon daemon written in Golang.
This page covers how Drand is used within the Filecoin network. For more information on Drand generally, take a look at the project’s documentation.
By polling the appropriate endpoint, a Filecoin node will get back a Drand value formatted as follows:
signature: the threshold BLS signature over the previous signature value and the current round number.
previous_signature: the threshold BLS signature from the previous Drand round.
round: the index of this randomness value in the sequence of all random values produced by this Drand network.
The message signed is the concatenation of the round number treated as a uint64 and the previous signature. At the moment, Drand uses BLS signatures on the BLS12-381 curve with the latest v7 RFC of hash-to-curve, and the signature is made over G1.
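The message construction can be sketched like this. The byte order of the round number and the exact concatenation order are assumptions of this sketch; the authoritative definition is Drand's chained-beacon scheme:

```python
import hashlib
import struct

def drand_message(round_number: int, previous_signature: bytes) -> bytes:
    """Build the digest a Drand node signs for a given round: the round
    number as a uint64 concatenated with the previous signature, hashed.
    The BLS signature over this digest is made on G1."""
    round_bytes = struct.pack(">Q", round_number)  # uint64, big-endian (assumed)
    return hashlib.sha256(round_bytes + previous_signature).digest()

# Different rounds produce different messages even with the same
# previous signature.
msg = drand_message(2, b"\x01" * 96)
```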
Filecoin nodes fetch the Drand entry from the distribution network of the selected Drand network.
Drand distributes randomness using multiple distribution channels such as HTTP servers, S3 buckets, gossiping, etc. Simply put, the Drand nodes themselves will not be directly accessible by consumers; rather, highly-available relays will be set up to serve Drand values over these distribution channels.
On initialization, Filecoin initializes a Drand client with chain info that contains the following information:
Period: the period of time between each Drand randomness generation.
GenesisTime: the time at which the first round in the Drand randomness chain is created.
PublicKey: the public key to verify randomness.
GenesisSeed: the seed that has been used for creating the first randomness.
It is possible to simply store the hash of this chain info and retrieve the contents from the Drand distribution network via the /info endpoint.
Thereafter, the Filecoin client can call Drand’s endpoints:
/public/latest to get the latest randomness value produced by the beacon.
/public/<round> to get the randomness value produced by the beacon at a given round.
Drand is used as a randomness beacon for leader election in Filecoin. While Drand returns multiple values with every call to the beacon (see above), Filecoin blocks need only store a subset of these in order to track a full Drand chain. This information can then be mixed with on-chain data for use in Filecoin.
Any Drand beacon outage will effectively halt Filecoin block production: since no new randomness is produced, Filecoin miners cannot generate new blocks. Specifically, any call to the Drand network for a new randomness entry during an outage will block in Filecoin.
After a beacon downtime, Drand nodes will work to quickly catch up to the current round. In this way, the above time-to-round mapping in Drand used by Filecoin remains invariant after this catch-up following downtime.
While Filecoin miners were not able to mine during the Drand outage, they will quickly be able to run leader election thereafter, given a rapid production of Drand values. We call this a catch-up period.
During the catch-up period, Filecoin nodes will backdate their blocks in order to continue using the same time-to-round mapping to determine which Drand round should be integrated according to the time. Miners can then choose to publish null blocks for the outage period, including the appropriate Drand entries throughout the blocks per the time-to-round mapping, or, as is more likely, try to craft valid blocks that might have been created during the outage.
Based on the level of decentralization of the Filecoin network, we expect to see varying levels of miner collaboration during this period. This is because there are two incentives at play: trying to mine valid blocks during the outage to collect block rewards and not falling behind a heavier chain being mined by a majority of miners who may or may not have ignored a portion of these blocks.
In any event, a heavier chain will emerge after the catch-up period and mining can resume as normal.
In the Filecoin blockchain, network consensus is achieved using the Expected Consensus (EC) algorithm, a probabilistic, Byzantine fault-tolerant consensus protocol. At a high level, EC achieves consensus by running a secret, fair, and verifiable leader election at every epoch where a set number of participants may become eligible to submit a block to the chain based on fair and verifiable criteria.
Expected Consensus (EC) has the following properties:
Each epoch has potentially multiple elected leaders who may propose a block.
A winner is selected randomly from a set of network participants weighted according to the respective storage power they contribute to the Filecoin network.
All blocks proposed are grouped together in a tipset, from which the final chain is selected.
A block producer can be verified by any participant in the network.
The identity of a block producer is anonymous until they release their block to the network.
In summary, EC involves the following steps at each epoch:
A storage provider checks to see if they are elected to propose a block by generating an election proof.
Zero, one, or multiple storage providers may be elected to propose a block. This does not mean that an elected participant is guaranteed to be able to submit a block. In the case where:
No storage providers are elected to propose a block in a given epoch; a new election is run in the next epoch to ensure that the network remains live.
One or more storage providers are elected to propose a block in a given epoch; each must generate a WinningPoSt proof-of-storage to be eligible to actually submit a block.
Each potential block producer elected generates a storage proof using WinningPoSt for a randomly selected sector within a short window of time. Potential block producers that fail this step are not eligible to produce a block. In this step, the following could occur:
All potential block producers fail WinningPoSt, in which case EC returns to step 1 (described above).
One or more potential block producers pass WinningPoSt, which means they are eligible to submit their blocks to the epoch's tipset.
Blocks generated by block producers are grouped into a tipset.
The tipset that reflects the most committed storage on the network is selected.
Using the selected tipset, the chain state is propagated.
EC returns to step 1 in the next epoch.
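The election in step 1 can be illustrated with a toy model: a miner "wins" when a pseudorandom draw falls within its share of total network power. This is an illustration only; the real protocol uses a VRF over Drand randomness, not a bare SHA-256:

```python
import hashlib

def election_proof(randomness: bytes, miner: str) -> int:
    """Toy stand-in for a VRF: hash the epoch randomness with the miner ID."""
    digest = hashlib.sha256(randomness + miner.encode()).digest()
    return int.from_bytes(digest[:8], "big")

def is_elected(randomness: bytes, miner: str, power: int, total_power: int) -> bool:
    """A miner is elected when its draw falls within its share of total
    network power, so election probability is proportional to storage
    power. Zero, one, or several miners may win in the same epoch."""
    draw = election_proof(randomness, miner) / 2**64
    return draw < power / total_power

# Illustrative epoch: two miners holding 40% and 10% of network power.
rand = b"drand-round-1234"
winners = [m for m, p in [("f01000", 40), ("f01001", 10)]
           if is_elected(rand, m, p, total_power=100)]
```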
In Filecoin, cryptographic proving systems, often simply referred to as proofs, are used to validate that a storage provider (SP) is properly storing data.
Different blockchains use different cryptographic proving systems (proofs) based on the network’s specific purpose, goals, and functionality. Regardless of which method is used, proofs have the following in common:
All blockchain networks seek to achieve consensus and rely on proofs as part of this process.
Proofs incentivize network participants to behave in certain ways and allow the network to penalize participants who do not abide by network standards.
Proofs allow decentralized systems to agree on a network state without a central authority.
Proof-of-Work and Proof-of-Stake are both fairly common proof methods:
Proof-of-Work: nodes in the network solve complex mathematical problems to validate transactions and create new blocks.
Proof-of-Stake: nodes in the network are chosen to validate transactions and create new blocks based on the amount of cryptocurrency they hold and “stake” in the network.
The Filecoin network aims to provide useful, reliable storage to its participants. With a traditional centralized entity like a cloud storage provider, explicit trust is placed in the entity itself that the data will be stored in a way that meets some minimum set of standards such as security, scalability, retrievability, or replication. Because the Filecoin network is a decentralized network of storage providers (SPs) distributed across the globe, network participants need an automated, trustless, and decentralized way to validate that an SP is doing a good job of handling the data.
In particular, the Filecoin proof process must verify the data was properly stored at the time of the initial request and is continuing to be stored based on the terms of the agreement between the client and the SP. In order for the proof processes to be robust, the process must:
Target a random part of the data.
Occur at a time interval such that it is not possible, profitable, or rational for an SP to discard and re-fetch the copy of data.
In Filecoin, this process is known as Proof-of-Storage, and consists of two distinct types of proofs:
Proof of Replication (PoRep): a procedure used at the time of initial data storage to validate that an SP has created and stored a unique copy of some piece of data.
Proof of Spacetime (PoST): a procedure to validate that an SP is continuing to store a unique copy of some piece of data.
In the Filecoin storage lifecycle process, Proof-of-Replication (PoRep) is used when an SP agrees to store data on behalf of a client and receives a piece of client data. In this process:
The data is placed into a sector.
The sector is sealed by the SP.
A unique encoding, which serves as proof that the SP has replicated a copy of the data they agreed to store, is generated (described in Sealing as proof).
The proof is compressed.
The result of the compression is submitted to the network as certification of storage.
The unique encoding created during the sealing process is generated using the following pieces of information:
The data that is sealed.
The storage provider who seals the data.
The time at which the data was sealed.
Because of the principles of cryptographic hashing, a new encoding will be generated if the data changes, the storage provider sealing the data changes, or the time of sealing changes. This encoding is unique and can be used to verify that a specific storage provider did, in fact, store a particular piece of client data at a specific time.
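The dependence of the encoding on all three inputs can be illustrated with a plain hash. This is purely illustrative; the real PoRep uses a far more elaborate sealing computation, not a single SHA-256:

```python
import hashlib

def seal_commitment(data: bytes, provider_id: str, seal_epoch: int) -> str:
    """Illustrative only: any change to the data, the sealing provider,
    or the sealing time yields a completely different commitment."""
    h = hashlib.sha256()
    h.update(data)
    h.update(provider_id.encode())
    h.update(seal_epoch.to_bytes(8, "big"))
    return h.hexdigest()

c1 = seal_commitment(b"client data", "f01000", 100)
c2 = seal_commitment(b"client data", "f01001", 100)  # different provider
c3 = seal_commitment(b"client data", "f01000", 101)  # different time
assert c1 != c2 and c1 != c3
```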
After a storage provider has proved that they have replicated a copy of the data that they agreed to store, the SP must continue to prove to the network that:
They are still storing the requested data.
The data is available.
The data is still sealed.
Because this method is concerned with proving that data is being stored in a particular space for a particular period or at a particular time, it is called Proof-of-Spacetime (PoSt). In Filecoin, the PoSt process is handled using two different sub-methods, each of which serves a different purpose:
WinningPoSt is used to prove that an SP selected using an election process has a replica of the data at the specific time that they were asked and is used in the block consensus process.
WindowPoSt is used to prove that, for any and all SPs in the network, a copy of the data that was agreed to be stored is being continuously maintained over time and is used to audit SPs continuously.
WinningPoSt is used to prove that an SP selected via election has a replica of the data at the specific time that they were asked and is specifically used in Filecoin to determine which SPs may add blocks to the Filecoin blockchain.
At the beginning of each epoch, a small number of SPs are elected to mine new blocks using the Expected Consensus algorithm, which guarantees that validators will be chosen based on a probability proportional to their power. Each of the SPs selected must submit a WinningPoSt, proof that they have a sealed copy of the data that they have included in their proposed block. The deadline to submit this proof is the end of the current epoch and was intentionally designed to be short, making it impossible for the SP to fabricate the proof. Successful submission grants the SP:
The block reward.
The opportunity to charge other nodes fees in order to include their messages in the block.
If an SP misses the submission deadline, no penalty is incurred, but the SP misses the opportunity to mine a block and receive the block reward.
WindowPoSt is used to prove that, for any and all SPs in the network, a copy of the data that was agreed to be stored is being continuously maintained over time; it is used to audit SPs continuously. In WindowPoSt, every SP must demonstrate the availability of all claimed sectors each proving period. Sector availability is not proved individually; rather, SPs prove a whole partition of sectors at once, and each partition must be proven by its assigned deadline (a 30-minute interval within the proving period).
The more sectors an SP has pledged to store, the more partitions of sectors the SP will need to prove per deadline. Because this requires the SP to have access to sealed copies of each of the requested sectors, it is irrational for the SP to re-seal data every time it needs to provide a WindowPoSt proof, ensuring that SPs on the network continuously maintain the data they agreed to store. Additionally, failure to submit WindowPoSt for a sector results in the SP's pledge collateral being forfeited and its storage power being reduced.
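The proving-period arithmetic can be sketched as follows, assuming mainnet parameters of a 24-hour proving period divided into 48 deadlines of 30 minutes each (60 epochs at 30 seconds per epoch):

```python
EPOCH_SECONDS = 30
DEADLINE_MINUTES = 30
DEADLINES_PER_PERIOD = 48  # assumed mainnet value

epochs_per_deadline = DEADLINE_MINUTES * 60 // EPOCH_SECONDS    # 60 epochs
epochs_per_period = epochs_per_deadline * DEADLINES_PER_PERIOD  # 2880 epochs

def deadline_index(epoch: int, period_start: int) -> int:
    """Which 30-minute deadline window a given epoch falls into."""
    return ((epoch - period_start) % epochs_per_period) // epochs_per_deadline

print(epochs_per_period)       # 2880 epochs = 24 hours
print(deadline_index(130, 0))  # epoch 130 falls in deadline 2
```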
FIL is the cryptocurrency that powers the Filecoin network. This page explains what FIL is, how it can be used, and its denominations.
FIL plays a vital role in incentivizing users to participate in the Filecoin network and ensuring its smooth operation. Here are some ways in which FIL is used on the Filecoin network:
When a user wants to store data on the Filecoin network, they pay FIL to the storage providers who offer their storage space. The payment is made in advance for the period of time that the data will be stored on the network.
In addition, storage providers choose their own terms and payment mechanisms for providing storage and retrieval services so other options (such as fiat payments) can be available.
Storage providers are also rewarded with FIL for providing their storage space and performing other useful tasks on the network. FIL is used to reward storage providers who validate and add new blocks to the Filecoin blockchain. Providers receive a block reward in FIL for each new block they add to the blockchain and also earn transaction fees in FIL for processing storage and retrieval transactions.
As members of the Filecoin community, FIL holders are encouraged to participate in the Filecoin governance process. They can do so by proposing, deliberating, designing, and/or contributing to consensus for network changes, alongside other stakeholders in the Filecoin community, including implementers, Core Devs, storage providers, and other ecosystem partners. Learn more about the Filecoin Governance process.
FIL, NanoFIL, and PicoFIL are all denominated in the same cryptocurrency unit, but they represent different levels of precision and granularity. For most users, FIL is the main unit of measurement and is used for most transactions and payments on the Filecoin network.
Much like how a US penny represents a fraction of a US dollar, there are many ways to represent value using Filecoin. This is because some actions on the Filecoin network require substantially less value than one whole FIL. The different denominations of FIL you may see referenced across the ecosystem are:
FIL: 1 (the base unit)
milliFIL: 1,000 per FIL
microFIL: 1,000,000 per FIL
nanoFIL: 1,000,000,000 per FIL
picoFIL: 1,000,000,000,000 per FIL
femtoFIL: 1,000,000,000,000,000 per FIL
attoFIL: 1,000,000,000,000,000,000 per FIL
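Conversions between these denominations are pure powers of ten; a small sketch using Decimal to avoid floating-point error:

```python
from decimal import Decimal

# Number of units per 1 FIL, following the table above.
UNITS_PER_FIL = {
    "FIL": 1,
    "milliFIL": 10**3,
    "microFIL": 10**6,
    "nanoFIL": 10**9,
    "picoFIL": 10**12,
    "femtoFIL": 10**15,
    "attoFIL": 10**18,
}

def to_atto(amount: Decimal, unit: str) -> int:
    """Convert an amount in any denomination to attoFIL, the smallest unit."""
    return int(amount * (10**18 // UNITS_PER_FIL[unit]))

print(to_atto(Decimal("1"), "FIL"))        # 1000000000000000000
print(to_atto(Decimal("2.5"), "nanoFIL"))  # 2500000000
```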
The most common way to get FIL is to use an exchange. You should be aware of some specific steps when trying to transfer FIL from an exchange to your wallet.
A cryptocurrency exchange is a digital platform where users can buy, sell, and trade cryptocurrencies for other cryptocurrencies or traditional fiat currencies like USD, EUR, or JPY.
Cryptocurrency exchanges provide a marketplace for users to trade their digital assets and are typically run by private companies that facilitate these transactions. These exchanges can differ in terms of fees, security protocols, and the variety of cryptocurrencies they support.
Users can typically sign up for an account with a cryptocurrency exchange, deposit funds into their account, and then use those funds to purchase or sell cryptocurrencies at the current market price. Some exchanges offer advanced trading features like margin trading, stop-loss orders, and trading bots.
It's important to note that while cryptocurrency exchanges can offer convenience and liquidity for traders, they also come with risks like hacking and regulatory uncertainty. Therefore, users should take precautions to protect their funds and do their own research before using any particular exchange.
There are many exchanges that allow users to buy, sell, and trade FIL. Websites like coingecko.com and coinmarketcap.com keep track of which exchanges support which cryptocurrencies. You can use these lists to help you decide which exchange to use.
Once you have found an exchange you want to use, you will have to create an account with that exchange. Many exchanges have strict verification and Know-Your-Customer (KYC) processes in place, so it may take a few days to create your account. However, most large exchanges can verify your information in a few minutes.
Purchasing cryptocurrency varies from exchange to exchange, but the process is usually something like this:
Add funds to your exchange account in your local currency (USD, EUR, JPY, etc.).
Exchange your local currency for FIL at a set price.
Some exchanges allow users to fund and withdraw FIL using any of the Filecoin address types. However, some exchanges only support one or a handful of the available address types. Most exchanges do not currently support f410 addresses.
If your exchange does not yet support Filecoin Eth-style 0x addresses, you must create a wallet to relay the funds through. Take a look at the Transfer FIL page for details on how to transfer your funds safely.
A fiat on-ramp is a service or platform that allows individuals to convert traditional fiat currencies such as the US dollar, Euro, or any other government-issued currency into cryptocurrencies. These on-ramps serve as entry points for people who want to start participating in the cryptocurrency ecosystem by purchasing digital currencies with their money but don't want to sign up with a cryptocurrency exchange.
FIL is supported by a number of fiat on-ramps, such as:
If you know of any other services that can be added to this list, raise an issue on GitHub.
Users are cautioned to do their own due diligence with respect to choosing a fiat on-ramp provider.
Crypto ATMs, also known as Bitcoin ATMs, are kiosks that allow individuals to buy and/or sell cryptocurrencies in exchange for fiat currency like the US dollar. They function similarly to traditional ATMs but are not connected to a bank account. Instead, they connect the user directly to a cryptocurrency exchange.
Using a Bitcoin ATM often comes with higher fees than online exchanges. Fees can vary, but they can range anywhere from 5% to 15% or even more per transaction.
If you’re looking to get FIL to test your applications on a testnet like Calibration, check how to get test tokens! Test FIL is often referred to as tFIL.
A Filecoin address is an identifier that refers to an actor in the Filecoin state. All actors (miner actors, the storage market actor, account actors) have an address.
All Filecoin addresses begin with an f to indicate the network (Filecoin), followed by one of the address prefix numbers (0, 1, 2, 3, or 4) to indicate the address type. There are five address types:
0: An ID address.
1: A secp256k1 public key address.
2: An actor address.
3: A BLS public key address.
4: Extensible, user-defined actor addresses. f410 addresses refer to the Ethereum-compatible address space; each f410 address is equivalent to an 0x address.
Each of the address types is described below.
All actors have a short integer assigned to them by InitActor, a unique actor that can create new actors. This integer is the ID of that actor. An ID address is an actor's ID prefixed with the network identifier and the address type.
Actor ID addresses are not robust in the sense that they depend on chain state and are defined on-chain by the InitActor. Additionally, actor IDs can change for a brief time after creation if the same ID is assigned to different actors on different forks. Actor ID addresses are similar to monotonically increasing numeric primary keys in a relational database: when a chain reorganization occurs (similar to a rollback in a SQL database), the same ID can end up referring to different rows. The expected consensus algorithm resolves the conflict. Once the state that defines a new ID reaches finality, no changes can occur, and the ID is bound to that actor forever.
For example, the mainnet burn account has the ID address f099. ID addresses are often referred to by their shorthand, f0.
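The ID address format can be sketched as a simple string construction (the function name is illustrative, not a real library API):

```python
def id_address(actor_id: int, network: str = "f") -> str:
    """An ID address is the network prefix, the address type 0,
    and the actor's integer ID."""
    return f"{network}0{actor_id}"

print(id_address(99))    # f099 -- the mainnet burn account
print(id_address(1248))  # f01248
```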
Actors managed directly by users, like accounts, are derived from a public-private key pair. If you have access to a private key, you can sign messages sent from that actor. The public key is used to derive an address for the actor. Public key addresses are referred to as robust addresses as they do not depend on the Filecoin chain state.
Public key addresses allow devices, like hardware wallets, to derive a valid Filecoin address for your account using just the public key. The device doesn’t need to ask a remote node what your ID address is. Public key addresses provide a concise, safe, human-readable way to reference actors before the chain state is final. ID addresses are used as a space-efficient way to identify actors in the Filecoin chain state, where every byte matters.
Filecoin supports two types of public key addresses:
secp256k1 addresses, which begin with the prefix f1.
BLS addresses, which begin with the prefix f3.
For BLS addresses, Filecoin uses the curve bls12-381 for BLS signatures, which is a pair of two related curves, G1 and G2.
Filecoin uses G1 for public keys, as G1 allows for a smaller representation of public keys, and G2 for signatures. This implements the same design as ETH2 but contrasts with Zcash, which has signatures on G1 and public keys on G2. However, unlike ETH2, which stores private keys in big-endian order, Filecoin stores and interprets private keys in little-endian order.
Public key addresses are often referred to by their shorthand, f1 or f3.
Actor addresses provide a way to create robust addresses for actors not associated with a public key. They are generated by taking a sha256 hash of the output of the account creation. For example, the ZH storage provider has the actor address f2plku564ddywnmb5b2ky7dhk4mb6uacsxuuev3pi and the ID address f01248.
Actor addresses are often referred to by their shorthand, f2.
Filecoin supports extensible, user-defined actor addresses through the f4 address class, introduced in Filecoin Improvement Proposal (FIP) 0048. The f4 address class provides the following benefits to the network:
A predictable addressing scheme to support interactions with addresses that do not yet exist on-chain.
User-defined, custom addressing systems without extensive changes and network upgrades.
Support for native addressing schemes from foreign runtimes such as the EVM.
An f4 address is structured as f4<address-manager-actor-id>f<new-actor-id>, where <address-manager-actor-id> is the actor ID of the address manager, and <new-actor-id> is the arbitrary actor ID chosen by that actor. An address manager is an actor that can create new actors and assign an f4 address to the new actor.
Currently, per FIP 0048, f4 addresses may only be assigned by and in association with specific, built-in actors called address managers. Once users are able to deploy custom WebAssembly actors, this restriction will likely be relaxed in a future FIP.
As an example, suppose an address manager has an actor ID (an f0 address) of 123, and that address manager creates a new actor. The f4 address of the new actor is f4123fa3491xyz, where f4 is the address class, 123 is the actor ID of the address manager, f is a separator, and a3491xyz is the arbitrary <new-actor-id> chosen by that actor.
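Parsing the f4 structure from the example can be sketched like this. The sub-address a3491xyz is the illustrative value from the text above, not a real on-chain address:

```python
def parse_f4(address: str):
    """Split an f4 address into (manager actor ID, sub-address).
    Format: f4<address-manager-actor-id>f<new-actor-id>."""
    if not address.startswith("f4"):
        raise ValueError("not an f4 address")
    manager_id, _, sub = address[2:].partition("f")
    return int(manager_id), sub

print(parse_f4("f4123fa3491xyz"))  # (123, 'a3491xyz')
```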
Wallets provide a way to securely store Filecoin, along with other digital assets. These wallets consist of a public and private key, which work similarly to a bank account number and password.
When someone sends cryptocurrency to your wallet address, the transaction is recorded on the blockchain network, and the funds are added to your wallet balance. Similarly, when you send cryptocurrency from your wallet to someone else’s wallet, the transaction is recorded on the blockchain network, and the funds are deducted from your wallet balance.
There are various types of cryptocurrency wallets, including desktop, mobile, hardware, and web-based wallets, each with its own unique features and levels of security. It’s important to choose a reputable and secure wallet to ensure the safety of your digital assets.
We do not provide technical support for any of these wallets. Please use caution when researching and using the wallets listed below. Wallets that have conducted third-party audits of their open-source code by a reputable security auditor are marked recommended below.
A hot wallet refers to any wallet that is permanently connected to the internet. They can be mobile, desktop, or browser-based. Hot wallets make it faster and easier to access digital assets but could be vulnerable to online attacks. Therefore, it is recommended to keep large balances in cold wallets and only use hot wallets to hold funds that need to be accessed frequently.
Cold wallets most commonly refer to hardware wallet devices shaped like a USB stick. They are typically offline and only connected to the internet for transactions. Accessing a cold wallet requires physical possession of the device plus knowledge of the private key, which makes them more resistant to theft. Cold wallets can be less convenient and are most useful for storing larger balances securely.
Wallets that have gone through an audit have had their codebase checked by a recognized security firm for security vulnerabilities and potential leaks. However, just because a wallet has had an audit does not mean that it’s 100% bug-proof. Be incredibly cautious when using unaudited wallets.
Never share your seed phrase, password, or private keys. Bad actors will often use social engineering tactics such as phishing emails or posing as customer service or tech support to lure users into handing over their private key or seed phrase.
If you know of a wallet that supports Filecoin, you can submit a pull request to this page and add it!
If the wallet is a mobile wallet, it must be available on both Android and iOS.
The wallet must have been audited. The results of this audit must be public.
InterPlanetary Consensus (IPC) powers planetary-scale decentralized applications (dApps) through horizontal scalability of Filecoin, Ethereum and more.
For decentralized applications (dApps), there are several key motivations to adopt scaling: performance, decentralization, and security. The challenge is that these are often conflicting goals.
IPC is a scaling solution intentionally designed to achieve considerable performance, decentralization and security for dApps.
Subnets are organized in a hierarchy, with a parent subnet able to spawn any number of child subnets. Within a hierarchical subsystem, subnets can seamlessly communicate with each other, reducing the need for cross-chain bridges.
Earlier, we talked about the challenge of scaling solutions to balance performance, security and decentralization. IPC is a standout framework that strikes a considerable balance between these factors, to achieve breakthroughs in scaling.
Highly customizable without compromising security. Most L2 scaling solutions today either inherit the L1's security features but don't have their own consensus algorithms (e.g. rollups), or do the reverse (e.g. sidechains). They are also deployed in isolation and require custom bridges or protocols to transfer assets and state between L2s that share a common L1, which are vulnerable to attacks. In contrast, IPC subnets have their own consensus algorithms, inherit security features from the parent subnet and have native cross-net communication, eliminating the need for bridges.
Here are some practical examples of how IPC improves the performance of dApps:
Distributed Computation: Spawn ephemeral subnets to run distributed computation jobs.
Coordination: Assemble into smaller subnets for decentralized orchestration with high throughput and low fees.
Localization: Leverage proximity to improve performance and operate with very low latency in geographically constrained settings.
Partition tolerance: Deploy blockchain substrates in mobile settings or other environments with limited connectivity.
With better performance, lower fees and faster transactions, IPC can rapidly improve horizontal and vertical markets with decentralized technology:
Decentralized Finance (DeFi): Enabling truly high-frequency trading and traditional backends with verifiability and privacy.
Big Data and Data Science: Multiple teams are creating global-scale distributed compute networks to enable data science analysis on exabytes of data stored on decentralized networks.
Metaverse/Gaming: Enabling real-time tracking of player interactions in virtual worlds.
DAOs: Assemble into smaller subnets for decentralized orchestration with high throughput and low fees.
This section covers the very basics of how retrieving data works on the Filecoin network.
There are multiple ways to fetch data from a storage provider. This page covers some of the most popular methods.
Lassie is a simple retrieval client for IPFS and Filecoin. It finds and fetches your data over the best retrieval protocols available. Lassie makes Filecoin retrieval easy. While Lassie is powerful, the core functionality is expressed in a single CLI command:
Lassie also provides an HTTP interface for retrieving IPLD data from IPFS and Filecoin peers. Developers can use this interface directly in their applications to retrieve the data.
Lassie fetches content in content-addressed archive (CAR) form, so in most cases, you will need additional tooling to deal with CAR files. Lassie can also be used as a library to fetch data from Filecoin from within your application. Due to the diversity of data transport protocols in the IPFS ecosystem, Lassie is able to use the Graphsync or Bitswap protocols, depending on how the requested data is available to be fetched. One prominent use case of Lassie as a library is the Saturn Network. Saturn nodes fetch content from Filecoin and IPFS through Lassie in order to serve retrievals.
Or download and install Lassie using the Go package manager:
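At the time of writing, the Lassie CLI lives under `cmd/lassie` in the `filecoin-project/lassie` repository; check the repository for the current module path:

```shell
# Install the Lassie CLI with the Go toolchain
go install github.com/filecoin-project/lassie/cmd/lassie@latest
```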
You now have everything you need to retrieve a file with Lassie and extract the contents with go-car.
Retrieve
To retrieve data from Filecoin using Lassie, all you need is the CID of the content you want to download.
The video below demonstrates how Lassie can be used to render content directly from Filecoin and IPFS.
Lassie and go-car can work together to retrieve and extract data from Filecoin. All you need is the CID of the content to download.
This command uses a pipe (|) to chain two commands together. This will work on Linux or macOS; Windows users may need to use PowerShell. Alternatively, you can use the commands separately, as explained later on this page.
An example of fetching and extracting a single file, identified by its CID:
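As an illustrative sketch, the fetch-and-extract pipeline looks like this; `<CID>` is a placeholder for the CID of your content, and `lidar-data.tar` is an example output name:

```shell
# Fetch the content as a CAR stream (-o - writes to stdout)
# and pipe it into go-car, which reconstitutes the file.
lassie fetch -o - <CID> | car extract - > lidar-data.tar
```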
Basic progress information, similar to the output shown below, is displayed:
The resulting file is a tar archive:
Lassie CLI usage
Lassie's usage for retrieving data is as follows:
-p is an optional flag that tells Lassie to display detailed progress information as it fetches your data.
For example:
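A hypothetical invocation with progress enabled (`<CID>` is a placeholder):

```shell
lassie fetch -p <CID>
```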
-o is an optional flag that tells Lassie where to write the output. If you don’t specify a file, Lassie will append .car to your CID and use that as the output file name.
If you specify -o -, the output will be written to stdout so it can be piped to another command, such as go-car, or redirected to a file.
<CID>/path/to/content is the CID of the content you want to retrieve, plus an optional path to a specific file within that content. Example:
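A hypothetical invocation that retrieves a single file from within a larger piece of content (the CID and path below are placeholders):

```shell
lassie fetch <CID>/images/photo.jpg
```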
A CID is always necessary, and if you don’t specify a path, Lassie will attempt to download the entire content. If you specify a path, Lassie will only download that specific file or, if it is a directory, the entire directory and its contents.
go-car CLI usage
The car extract command can be used to extract files and directories from a CAR:
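A minimal sketch, assuming a CAR file named `my-content.car` in the current directory:

```shell
# Extract the entire CAR into the current directory
car extract -f my-content.car
```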
-f is an optional flag that tells go-car where to read the input from. If omitted, it will read from stdin, as in our example above where we piped lassie fetch -o - output to car extract.
/path/to/file/or/directory is an optional path to a specific file or directory within the CAR. If omitted, go-car will attempt to extract the entire CAR.
<OUTPUT_DIR> is an optional argument that tells go-car where to write the output. If omitted, output is written to the current directory.
If you supply - as the output, as in the above example, go-car will attempt to extract the content directly to stdout. This will only work when extracting a single file.
In the example above, where we fetched a file named lidar-data.tar, the > operator was used to redirect the output of car extract to a named file. This is because the content we fetched was raw file data that did not have a name encoded. In this case, if we didn’t use - and > filename, go-car would write to a file named unknown. In this instance, go-car was used to reconstitute the file from the raw blocks contained within Lassie’s CAR output.
go-car has other useful commands. The first is car ls, which can be used to list the contents of a CAR. The second is car inspect, which can be used to inspect the contents of a CAR and optionally verify its integrity.
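For example, assuming the `lidar-data.car` file fetched earlier:

```shell
car ls lidar-data.car       # list the contents of the CAR
car inspect lidar-data.car  # print details about the CAR's structure
```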
And there we have it! Downloading and managing data from Filecoin is simple when you use Lassie and go-car.
The Lassie HTTP daemon is an HTTP interface for retrieving IPLD data from IPFS and Filecoin peers. It fetches content from peers known to have it and provides the resulting data in CAR format.
MetaMask is a popular browser extension that allows users to interact with blockchain applications. This guide shows you how to configure MetaMask to work with the Filecoin network.
ChainID.network is a website that lets users easily connect their wallets to EVM-compatible blockchains. ChainID is the simplest way to add the Filecoin network to your MetaMask wallet.
If you can't or don't want to use ChainID, you can add the Filecoin network to your MetaMask manually.
Before we get started, you’ll need the following:
The process for configuring MetaMask to use Filecoin is fairly simple but has some very specific variables that you must copy exactly.
Open your browser and open the MetaMask plugin. If you haven’t opened the MetaMask plugin before, you’ll be prompted to create a new wallet. Follow the prompts to create a wallet.
Click the user circle and select Settings.
Select Networks.
Click Add a network.
Scroll down and click Add a network manually.
Enter the following information into the fields:
Review the values in the fields and click Save.
The Filecoin network should now be shown in your MetaMask window.
Done!
You can now use MetaMask to interact with the Filecoin network.
Before you can connect MetaMask to your Ledger, you must install the Filecoin Ledger app on your Ledger device.
Open Ledger Live and navigate to My Ledger.
Connect your Ledger device and unlock it.
Confirm that you allow My Ledger to access your Ledger device. You can do that by clicking both buttons on your Ledger device simultaneously.
Go back to Ledger Live on your computer.
In My Ledger, head over to App catalog and search for Filecoin.
Click Install.
MetaMask requires that the Filecoin app on your Ledger device is set to Expert mode.
Open the Filecoin app on your Ledger device.
Use the buttons on your device to navigate to Expert mode.
Press both buttons simultaneously to enable Expert mode.
Once you have installed the Filecoin app on your Ledger device and enabled expert mode, you can connect your device to MetaMask.
Open your browser and open the MetaMask extension.
In the Accounts menu, select Add hardware wallet.
Select Ledger
A list of accounts should appear. Select a 0x... account.
Done!
That's it! You've now successfully connected your Ledger device to MetaMask. When you submit any transactions through MetaMask using this account, the Filecoin Ledger app will prompt you for a confirmation on the Ledger device.
You may see a blind signing warning on your Ledger device. This is expected, and is the reason why Expert mode must be enabled before you can interact with the Filecoin Ledger app.
Explore the features that make Filecoin a compelling system for storing files. This page provides an overview of those features.
Filecoin has built-in processes to check the history of files and verify that they have been stored correctly over time. Every storage provider proves that they are maintaining their files in every 24-hour window. Clients can efficiently scan this history to confirm that their files have been stored correctly, even if the client was offline at the time. Any observer can check any storage provider’s track record and will notice if the provider has been faulty or offline in the past.
In Filecoin, file storage and retrieval deals are negotiated in open markets. Anybody can join the Filecoin network without needing permission. By lowering the barriers to entry, Filecoin enables a thriving ecosystem of many independent storage providers.
Prices for storage and retrieval are determined by supply and demand, not corporate pricing departments. Filecoin makes reliable storage available at hyper-competitive prices. Miners compete based on their storage, reliability, and speed rather than through marketing or locking users in.
Because storage is paid for, Filecoin provides a viable economic reason for files to stay available over time. Files are stored on computers that are reliable and well-connected to the internet.
In Filecoin, storage providers prove their reliability through their track record published on the blockchain, not through marketing claims published by the providers themselves. Users don’t need to rely on status pages or self-reported statistics from storage providers.
Users get to choose their own tradeoffs between cost, redundancy, and speed. Users are not limited to a set group of data centers offered by their provider but can choose to store their files on any storage provider participating in Filecoin.
Filecoin resists censorship because no central provider can be coerced into deleting files or withholding service. The network is made up of many different computers run by many different people and organizations. Faulty or malicious actors are noticed by the network and removed automatically.
In Filecoin, storage providers are rewarded for providing storage, not for performing wasteful computations. Filecoin secures its blockchain using proof of file replication and proof of storage over time. It doesn’t rely on energy-intensive proof-of-work schemes like other blockchains. Miners are incentivized to amass hard drives and put them to use by storing files. Filecoin doesn’t incentivize the hoarding of graphics cards or application-specific integrated circuits for the sole purpose of mining.
Filecoin’s blockchain is designed to store large files, whereas other blockchains can typically only store tiny amounts of data, very expensively. Filecoin can provide storage to other blockchains, allowing them to store large files. In the future, mechanisms will be added to Filecoin, enabling Filecoin’s blockchain to interoperate with transactions on other blockchains.
Files are referred to by the data they contain, not by fragile identifiers such as URLs. Files remain available no matter where they are hosted or who they are hosted by. When a file becomes popular, it can be quickly distributed by swarms of computers instead of relying on a central computer, which can become overloaded by network traffic.
When multiple users store the same file (and choose to make the file public by not encrypting it), everyone who wants to download the file benefits from Filecoin, keeping it available. No matter where a file is downloaded from, users can verify that they have received the correct file and that it is intact.
Retrieval providers are computers that have good network connections to lots of users who want to download files. By prefetching popular files and distributing them to nearby users, retrieval providers are rewarded for making network traffic flow smoothly and files download quickly.
Applications implementing Filecoin can store their data on any storage provider using the same protocol. There isn’t a different API to implement for each provider. Applications wishing to support several different providers aren’t limited to the lowest-common-denominator set of features supported by all their providers.
Migrating to a different storage provider is made easier because they all offer the same services and APIs. Users aren’t locked into providers because they rely on a particular feature of the provider. Also, files are content-addressed, enabling them to be transferred directly between providers without the user having to download and re-upload the files.
Traditional cloud storage providers lock users by making it cheap to store files but expensive to retrieve them again. Filecoin avoids this by facilitating a retrieval market where providers compete to give users their files back as fast as possible, at the lowest possible price.
The code that runs both clients and storage providers is open-source. Storage providers don’t have to develop their own software for managing their infrastructure. Everyone benefits from improvements made to Filecoin’s code.
In this article, we will discuss the functions of storage providers in the Filecoin network, the role of the indexer, and the retrieval process for publicly available data.
When a storage deal is originally made, the client can opt to make the data publicly discoverable. If this is the case, the storage provider must publish an advertisement of the storage deal to the Interplanetary Network Indexer (IPNI). IPNI maps a CID to a storage provider (SP). This mapping allows clients to query the IPNI to discover where content is on Filecoin.
The IPNI also tracks which data transfer protocols you can use to retrieve specific CIDs. Currently, Filecoin SPs have the ability to serve retrievals over Graphsync, Bitswap, and HTTP. This is dependent on the SP setup.
If a client wants to retrieve publicly available data from the Filecoin network, then they generally follow this process.
Before the client can submit a retrieval deal to a storage provider, they first need to find which providers hold the data. To do this, the client sends a query to the Interplanetary Network Indexer.
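As a sketch, the public IPNI instance at cid.contact exposes an HTTP lookup endpoint; replace `<CID>` with the CID of the content you are looking for:

```shell
# Ask the indexer which providers advertise this CID
curl "https://cid.contact/cid/<CID>"
```

The JSON response lists provider peer IDs, multiaddresses, and transport metadata that the client can use to choose a retrieval protocol.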
Assuming the IPNI returns more than one storage provider, the client can select which provider they’d like to deal with. Here, they will also get additional details (if needed) based on the retrieval protocol they want to retrieve the content over.
The client then attempts to retrieve the data from the SP over Bitswap, Graphsync, or HTTP. Note that currently, clients can only get full-piece retrievals using HTTP.
When attempting this retrieval deal using Graphsync, payment channels are used to pay FIL to the storage provider. These payment channels watch the data flow and pay the storage provider after each chunk of data is retrieved successfully.
Once the client has received the last chunk of data, the connection is closed.
Filecoin Saturn is an open-source, community-run Content Delivery Network (CDN) built on Filecoin.
Saturn is a Web3 CDN in Filecoin’s retrieval market. On one side of the network, websites buy fast, low-cost content delivery. On the other side, Saturn node operators earn Filecoin by fulfilling requests.
Saturn is trustless, permissionless, and inclusive. Anyone can run Saturn software, contribute to the network, and earn Filecoin.
Content on Saturn is IPFS content-addressed. Every piece of content is immutable, and every response is verifiable.
Incentives unite, align, and grow the network. Node operators earn Filecoin for accelerating web content, and websites get faster content delivery for less.
A public key address.
A public key address.
If you are already running your own lotus node, you can also .
Create an issue with the name of the wallet and its features.
InterPlanetary Consensus (IPC) is a framework that enables on-demand horizontal scalability of networks by deploying "subnets" running different consensus algorithms depending on the application's requirements.
Horizontal scaling generally refers to the addition of nodes to a system to increase its performance. For example, adding more nodes to a compute network helps distribute the effort needed to run a single compute task. This reduces cost per task and decreases latency, while improving overall throughput.
In web3, horizontal scalability refers to scaling blockchains for desired performance: more specifically, scaling the ability of a blockchain to process transactions and achieve consensus across an increasing number of users, at desired latencies and throughput. IPC is one such scaling solution, alongside other popular layer 2 solutions.
It achieves scaling through the permissionless spawning of new blockchain sub-systems, which are composed of subnets.
Subnets also have their own specific consensus algorithms, whilst leveraging security features from parent subnets. This allows dApps to use subnets to host sets of applications, or a single application, according to their cost and performance needs.
Multi-chain interoperability. IPC uses the Filecoin Virtual Machine (FVM) as its transaction execution layer. The FVM is a WASM-based polyglot execution environment for IPLD data, designed to support smart contracts written in any programming language and compiled to WASM. Today, IPC is fully compatible with Filecoin and Ethereum and can use either as a rootnet. IPC will eventually allow any chain to be taken as a rootnet.
Tight storage integration with Filecoin. IPC was designed from the data-centric L1, Filecoin, the largest decentralized storage network. IPC can leverage its storage primitives, like IPLD data integration, to deliver enhanced solutions for data availability and more.
Artificial Intelligence: IPC is fully compatible with Filecoin, the world's largest decentralized data storage network. Leveraging Filecoin, IPC can enable distributed computation to power hundreds of innovative AI models.
Visit the
Read the
Check out the
Connect with the community on
Make sure that you have Go installed and that your GOPATH is set up. By default, your GOPATH will be set to ~/go.
Install Lassie
Download the Lassie binary that matches your system architecture.
Download the go-car binary that matches your system architecture, or install the package using the Go package manager. The go-car package makes it easier to work with content-addressed archive (CAR) files:
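At the time of writing, the `car` CLI lives under `cmd/car` in the `ipld/go-car` repository; check the repository for the current module path:

```shell
# Install the go-car CLI with the Go toolchain
go install github.com/ipld/go-car/cmd/car@latest
```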
A GET query against a Lassie HTTP daemon allows retrieval from peers that have the content identified by the given root CID, streaming the DAG in the response in CAR format. You can read more about the HTTP request and response in the Lassie documentation. Lassie’s HTTP interface can be a very powerful tool for web applications that require fetching data from Filecoin and IPFS.
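As a sketch, assuming a daemon listening on port 8080 (the port flag value and the CID below are illustrative):

```shell
# Start the Lassie HTTP daemon
lassie daemon --port 8080

# In another terminal, request a CID as a CAR stream
curl -H "Accept: application/vnd.ipld.car" \
  "http://127.0.0.1:8080/ipfs/<CID>" -o data.car
```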
Lassie only returns data in CAR format, specifically the CARv1 format. Lassie's documentation describes the nature of the CAR data returned and the various options available to the client for manipulating the output.
Navigate to chainid.network.
Search for Filecoin Mainnet.
Click Connect Wallet.
Click Approve when prompted to Allow this site to add a network.
Click Switch network when prompted by MetaMask.
Open MetaMask from the browser extensions tab.
You should see Filecoin listed at the top.
You can now use MetaMask to interact with the Filecoin network.
Navigate to chainid.network.
Search for Filecoin Calibration.
Click Connect Wallet.
Click Approve when prompted to Allow this site to add a network.
You may be shown a warning that you are connecting to a test network. If prompted, click Accept.
Click Switch network when prompted by MetaMask.
Open MetaMask from the browser extensions tab. You should see Filecoin Calibration listed at the top.
You can now use MetaMask to interact with the Filecoin network.
Navigate to chainid.network.
Search for Filecoin Local testnet.
Click Connect Wallet.
Click Approve when prompted to Allow this site to add a network.
You may be shown a warning that you are connecting to a test network. If prompted, click Accept.
Click Switch network when prompted by MetaMask.
Open MetaMask from the browser extensions tab. You should see Filecoin Local testnet listed at the top.
You can now use MetaMask to interact with the Filecoin network.
A , or .
A browser with MetaMask installed.
Pick a Filecoin block explorer, and enter its URL into the Block explorer (optional) field.
MetaMask is compatible with the Ledger hardware wallet. Follow these instructions to connect your Filecoin addresses within MetaMask to your Ledger wallet. This guide assumes you have MetaMask and Ledger Live installed on your computer.
For more details, see the documentation for the official Filecoin Ledger app.
Here’s how they work: developers use APIs or libraries to send data to storage helpers. Behind the scenes, storage helpers receive the data and handle the underlying processes to store it in a reliable, decentralized way: by saving it to IPFS nodes, by making deals with Filecoin storage providers, or both. You can use the same APIs or other tools to retrieve the data quickly.
Storage helpers are available for NFTs (non-fungible tokens) or general data. If you are storing NFTs, check out the NFT-focused helpers. For general data, skip to the general-purpose helpers.
NFT.Storage focuses on the enduring preservation of NFTs for a low one-time fee. First mint your NFTs, then send us the NFT data, which we preserve in endowment-backed long-term Filecoin storage. As an NFT.Storage user, you support our platform when you choose Pinata and Lighthouse for hot storage, helping to sustain our valuable public goods. Your NFTs will also be included in the NFT Token Checker, a tool for block explorers, marketplaces, and wallets to show verification that NFT collections, tokens, and CIDs are preserved by NFT.Storage.
NFT.Storage Classic is a free service that provides hot data storage on the decentralized Filecoin network with fast retrieval through IPFS. As of June 30, 2024, NFT.Storage Classic uploads have been officially decommissioned; however, retrieval of existing data remains operational. For NFT data already uploaded through NFT.Storage Classic, the NFT.Storage Gateway makes the data retrievable on block explorers, marketplaces, and dapps.
ChainSafe Storage is an underlayer to ChainSafe’s encrypted IPFS & Filecoin file storage system. It offers S3-compatible bucket-style APIs for easy migration of data. As of September 2022, it’s the only storage helper with built-in encryption.
Web3.Storage is a fast and open developer platform for storing data on IPFS and Filecoin. Upload any data and Web3.Storage will ensure it ends up on a decentralized set of IPFS and Filecoin storage providers. There are JavaScript and Go libraries for the API, as well as a no-code web uploader. Free and paid plans are available.
Filecoin has an active community of contributors to answer questions and help newcomers get started. There is an open dialog between users, developers, and storage providers. If you need help, you can reach the person who designed or built the system in question. Reach out on .
Find out more over at .
A multi-currency hardware wallet. Recommended.
Yes
Supports sending & receiving FIL. Can be integrated with a Ledger hardware device. Recommended.
Yes
A multi-currency software wallet built-in to the Brave browser.
Yes
A multi-currency wallet, the official wallet of Binance.
Unknown
A multi-currency wallet.
Unknown
A multi-currency wallet.
Unknown
A multi-currency mobile wallet by Filfox.
Yes
MetaMask has an extension system called Snaps.
Yes
Network name: Filecoin
New RPC URL: one of the following:
- https://api.node.glif.io/rpc/v1
- https://filecoin.chainup.net/rpc/v1
- https://rpc.ankr.com/filecoin
Chain ID: 314
Currency symbol: FIL
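You can sanity-check an RPC endpoint before saving it by requesting the chain ID over JSON-RPC; a working mainnet endpoint should report chain ID 314, which is 0x13a in hex (the Glif URL below is one of the endpoints listed above):

```shell
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}' \
  https://api.node.glif.io/rpc/v1
```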
Network name: Filecoin Calibration testnet
New RPC URL: one of the following:
- https://api.calibration.node.glif.io/rpc/v1
- https://filecoin-calibration.chainup.net/rpc/v1
Chain ID: 314159
Currency symbol: tFIL
Network name: Filecoin Local testnet
New RPC URL: http://localhost:1234/rpc/v1
Chain ID: 31415926
Currency symbol: tFIL
This section contains information about the Filecoin project as a whole, and how you can interact with the community.
Filecoin is a highly modular project that is itself made out of many different protocols and tools. Many of these exist as their own projects, supported by Protocol Labs. Learn more about them below.
A modular network stack, libp2p enables you to run your network applications free from runtime and address services, independently of their location. Learn more at libp2p.io/.
IPLD is the data model of the content-addressable web. It allows us to treat all hash-linked data structures as subsets of a unified information space, unifying all data models that link data with hashes as instances of IPLD. Learn more at ipld.io/.
IPFS is a distributed system for storing and accessing files, websites, applications, and data. However, it does not have support for incentivization or guarantees of this distributed storage; Filecoin provides the incentive layer. Learn more at ipfs.tech/.
The Multiformats Project is a collection of protocols which aim to future-proof systems through self-describing format values that allow for interoperability and protocol agility. Learn more at multiformats.io/.
Interactive tutorials on decentralized web protocols, designed to introduce you to decentralized web concepts, protocols, and tools. Complete code challenges right in your web browser and track your progress as you go. Explore ProtoSchool’s tutorials on Filecoin at proto.school/.
So you want to contribute to Filecoin and the ecosystem? Here is a quick listing of things to which you can contribute and an overview on how you can get started.
Filecoin and its sister-projects are big, with lots of code written in multiple languages. We always need help writing and maintaining code, but it can be daunting to just jump in. We use the label Help Wanted on features or bug fixes that people can help out with. They are an excellent place for you to start contributing code.
The biggest and most active repositories we have today are:
If you want to start contributing to the core of Filecoin, those repositories are a great place to start. But the Help Wanted label exists in several related projects:
Filecoin is a huge project and undertaking, and with lots of code comes the need for lots of good documentation! However, we need a lot more help to write the awesome docs the project needs. If writing technical documentation is your area, any and all help is welcome!
Before contributing to the Filecoin docs, please read these quick guides; they’ll save you time and help keep the docs accurate and consistent!
If you have never contributed to an open-source project before, or just need a refresher, take a look at the contribution tutorial.
If interacting with people is your favorite thing to do in this world, join the Filecoin chat and discussion forums to say hello, meet others who share your goals, and connect with other members of the community. You should also consider joining Filecoin Slack.
Filecoin is designed for you to integrate into your own applications and services.
Get started by looking at the list of projects currently built on Filecoin. Build anything you think is missing! If you’re unsure about something, you can join the chat and discussion forums to get help or feedback on your specific problem/idea. You can also join a Filecoin Hackathon, apply for a Filecoin Developer Grant or apply to the Filecoin accelerator program to support the development of your project.
Filecoin is ultimately about building better protocols, and the community always welcomes ideas and feedback on how to improve those protocols.
Finally, we see Protocol Labs as a research lab, where YOUR ideas can become technologies that have a real impact on the world. If you’re interested in contributing to our research, please reach out to research@protocol.ai for more information. Include what your interests are so we can make sure you get to work on something fun and valuable.
This guide explains things to keep in mind when writing for Filecoin’s documentation. While the grammar, formatting, and style guide lets you know the rules you should follow, this guide will help you to properly structure your writing and choose the correct tone for your audience.
The purpose of a walkthrough is to tell the user how to do something. They do not need to convince the reader of something or explain a concept. Walkthroughs are a list of steps the reader must follow to achieve a process or function.
The vast majority of documentation within the Filecoin documentation project falls under the Walkthrough category. Walkthroughs are generally quite short, have a neutral tone, and teach the reader how to achieve a particular process or function. They present the reader with concrete steps on where to go, what to type, and things they should click on. There is little to no conceptual information within walkthroughs.
Goals
Use the following goals when writing walkthroughs:
Audience: General. Easy for anyone to read with minimal effort.
Formality: Neutral. Slang is restricted, but standard casual expressions are allowed.
Domain: Technical. Acronyms and tech-specific language are used and expected.
Tone: Neutral. Writing contains little to no emotion.
Intent: Instruct. Tell the reader how to do something.
Function or process
The end goal of a walkthrough is for the reader to achieve a very particular function. Installing the Filecoin Desktop application is an example. Following this walkthrough isn’t going to teach the reader much about working with the decentralized web or what Filecoin is. Still, by the end, they’ll have the Filecoin Desktop application installed on their computer.
Short length
Since walkthroughs cover one particular function or process, they tend to be quite short. The estimated reading time of a walkthrough is somewhere between 2 and 10 minutes. Most of the time, the most critical content in a walkthrough is presented in a numbered list. Images and GIFs can help the reader understand what they should be doing.
If a walkthrough is converted into a video, that video should be no longer than 5 minutes.
Walkthrough structure
Walkthroughs are split into three major sections:
1. What we’re about to do.
2. The steps we need to take.
3. A summary of what we just did, and potential next steps.
Articles are written with the intent to inform and explain something. These articles don’t contain any steps or actions that the reader has to perform right now.
These articles are vastly different in tone when compared to walkthroughs. Some topics and concepts can be challenging to understand, so creative writing and interesting diagrams are highly sought-after for these articles. Whatever writers can do to make a subject more understandable, the better.
Article goals
Use the following goals when writing conceptual articles:
| Goal | Value | Description |
| --- | --- | --- |
| Audience | Knowledgeable | Requires a certain amount of focus to understand. |
| Formality | Neutral | Slang is restricted, but standard casual expressions are allowed. |
| Domain | Any | Usually technical, but depends on the article. |
| Tone | Confident and friendly | The reader must feel confident that the writer knows what they’re talking about. |
| Intent | Describe | Tell the reader why something does the thing that it does, or why it exists. |
Article structure
Articles are separated into five major sections:
1. Introduction to the thing we’re about to explain.
2. What the thing is.
3. Why it’s essential.
4. What other topics it relates to.
5. Summary review of what we just read.
When writing a tutorial, you’re teaching a reader how to achieve a complex end-goal. Tutorials are a mix of walkthroughs and conceptual articles. Most tutorials will span several pages, and contain multiple walkthroughs within them.
Take the hypothetical tutorial _Get up and running with Filecoin_, for example. This tutorial will likely have the following pages:
1. A brief introduction to what Filecoin is.
2. Choose and install a command line client.
3. Understanding storage deals.
4. Import and store a file.
Pages 1 and 3 are conceptual articles, describing particular design patterns and ideas to the reader. All the other pages are walkthroughs instructing the user how to perform one specific action.
When designing a tutorial, keep in mind the walkthroughs and articles that already exist, and note down any additional content items that would need to be completed before creating the tutorial.
Here are some language-specific rules that the Filecoin documentation follows. If you use a writing service like Grammarly, most of these rules are turned on by default.
While Filecoin is a global project, the fact is that American English is the most commonly used style of English used today. With that in mind, when writing content for the Filecoin project, use American English spelling. The basic rules for converting other styles of English into American English are:
- Swap the `s` for a `z` in words like _categorize_ and _pluralize_.
- Remove the `u` from words like _color_ and _honor_.
- Swap `tre` for `ter` in words like _center_.
In a list of three or more items, follow each item except the last with a comma `,` (the serial comma):

Use:

- One, two, three, and four.
- Henry, Elizabeth, and George.

Avoid:

- One, two, three and four.
- Henry, Elizabeth and George.
As a proper noun, the name “Filecoin” (capitalized) should be used only to refer to the overarching project, to the protocol, or to the project’s canonical network:
Filecoin [the project] has attracted contributors from around the globe! Filecoin [the protocol] rewards contributions of data storage instead of computation! Filecoin [the network] is currently storing 50 PiB of data!
The name can also be used as an adjective:
The Filecoin ecosystem is thriving! I love contributing to Filecoin documentation!
When referring to the token used as Filecoin’s currency, the name FIL is preferred. It is alternatively denoted by the Unicode symbol for an integral with a double stroke, ⨎:
- Unit prefix: 100 FIL.
- Symbol prefix: ⨎100.
The smallest and most common denomination of FIL is the attoFIL (10^-18 FIL).
The collateral for this storage deal is 5 FIL. I generated ⨎100 as a storage provider last month!
Examples of discouraged usage:
Filecoin rewards storage providers with Filecoin. There are many ways to participate in the filecoin community. My wallet has thirty filecoins.
Consistency in the usage of these terms helps keep these various concepts distinct.
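The FIL and attoFIL denominations above can be sketched in code. This is a minimal illustration using integer math; the function names are ours and not part of any Filecoin library:

```python
# attoFIL is the smallest denomination of FIL: 1 FIL = 10^18 attoFIL.
ATTOFIL_PER_FIL = 10 ** 18

def fil_to_attofil(fil: int) -> int:
    """Convert a whole-FIL amount to attoFIL using exact integer math."""
    return fil * ATTOFIL_PER_FIL

def attofil_to_fil(attofil: int) -> float:
    """Convert attoFIL back to FIL (may lose precision for very large values)."""
    return attofil / ATTOFIL_PER_FIL

print(fil_to_attofil(5))         # 5000000000000000000
print(attofil_to_fil(10 ** 18))  # 1.0
```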
Lotus is the main implementation of Filecoin. As such, it is frequently referenced in the Filecoin documentation. When referring to the Lotus implementation, use a capital L. A lowercase l should only be used when referring to Lotus executable commands such as `lotus daemon`. Lotus executable commands should always be within code blocks:
If you have to use an acronym, spell out the full phrase first and include the acronym in parentheses `()` the first time it is used in each document. Exception: this generally isn’t necessary for commonly encountered acronyms like IPFS, unless writing a stand-alone article that may not be presented alongside project documentation.
Virtual Machine (VM), Decentralized Web (DWeb).
How the Markdown syntax looks, and code formatting rules to follow.
The Filecoin Docs project follows the GitHub Flavored Markdown syntax. This way, all articles display properly within GitHub itself.
We use the rules set out in the VSCode Markdownlint extension. You can import these rules into any text editor like Vim or Sublime. All rules are listed within the Markdownlint repository.
We highly recommend installing VSCode with the Markdownlint extension to help with your writing. The extension shows warnings within your markdown whenever your copy doesn’t conform to a rule.
The following rules explain how we organize and structure our writing. The rules outlined here are in addition to the rules found within the Markdownlint extension.
The following rules apply to editing and styling text.
Titles
All titles follow sentence structure. Only names and places are capitalized, along with the first letter of the title. All other letters are lower-case:
Every article starts with a front-matter title and description:
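A hypothetical front-matter block might look like this (the title and description values are invented for illustration):

```markdown
---
title: Install the Filecoin Desktop application
description: Learn how to install the Filecoin Desktop application on your computer.
---
```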
In the above example, `title` serves as an `<h1>` or `#` tag. There is only ever one title of this level in each article.
Titles do not contain punctuation. If you have a question within your title, rephrase it as a statement:
Bold text
Double asterisks `**` are used to define boldface text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
Italics
Underscores `_` are used to define italic text. Style the names of things in italics, except input fields or buttons:

Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
Code blocks
Tag code blocks with the syntax of the code they are presenting:
Output from command-line actions can be displayed by adding another codeblock directly after the input codeblock. Here’s an example telling the user to run `go version`, with the output of that command in a separate codeblock immediately after the first:
Command-line examples can be truncated with three periods `...` to remove extraneous information:
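A hypothetical example combining these rules, showing a tagged input codeblock followed by the command’s truncated output in a separate codeblock (the output text is illustrative):

````markdown
```shell
go help
```

```shell
Go is a tool for managing Go source code.

Usage:

	go <command> [arguments]
...
```
````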
Inline code tags
Surround directories, file names, and version numbers with inline code tags `` ` ``.
List items
All list items follow sentence structure. Only names and places are capitalized, along with the first letter of the list item. All other letters are lowercase:
Never leave Nottingham without a sandwich.
Brian May played guitar for Queen.
Oranges.
List items end with a period `.`, or a colon `:` if the list item has a sub-list:
- Charles Dickens novels:
  - Oliver Twist.
  - Nicholas Nickleby.
  - David Copperfield.
- J.R.R. Tolkien books:
  - The Hobbit.
  - Silmarillion.
  - Letters from Father Christmas.
Unordered lists
Use the dash character `-` for unordered list items:
Special characters
Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
Keyboard shortcuts
When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
The plus symbol `+` stays outside of the code tags.
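For instance, an instruction following these rules might read:

```markdown
Press `CTRL` + `C` to copy the highlighted text.
```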
The following rules and guidelines define how to use and store images.
Alt text
All images contain alt text so that screen-reading programs can describe the image to users with limited sight:
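A hypothetical image reference with alt text (the file path is an example, not a real asset):

```markdown
![The Lotus daemon running in a terminal window](images/lotus-daemon.png)
```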
While Filecoin shares some similarities with other file storage solutions, the protocol has significant differences that one should consider.
Filecoin combines many elements of other file storage and distribution systems. What makes Filecoin unique is that it runs on an open, peer-to-peer network while still providing economic incentives and proofs to ensure files are being stored correctly. This page compares Filecoin against other technologies that share some of the same properties.
| | Filecoin | Cloud storage |
| --- | --- | --- |
| Main use case | Storing files at hypercompetitive prices | Storing files using a familiar, widely supported service |
| Pricing | Determined by a hypercompetitive open market | Set by corporate pricing departments |
| Centralization | Many small, independent storage providers | A handful of large companies |
| Reliability stats | Independently checked by the network and publicly verifiable | Companies self-report their own stats |
| API | Applications can access all storage providers using the Filecoin protocol | Applications must implement a different API for each storage provider |
| Retrieval | Competitive market for retrieving files | Typically more expensive than storing files to lock users in |
| Fault handling | If a file is lost, the user is refunded automatically by the network | Companies can offer users credit if files are lost or unavailable |
| Support | If something goes wrong, the Filecoin protocol determines what happens without human intervention | If something goes wrong, users contact the support help desk to seek resolution |
| Physical location | Storage providers located anywhere in the world | Limited to where the provider’s data centers are located |
| Becoming a storage provider | Low barrier to entry for storage providers (computer, hard drive, internet connection) | High barrier to entry for storage providers (legal agreements, marketing, support staff) |
| | Filecoin | Bitcoin |
| --- | --- | --- |
| Main use case | File storage | Payment network |
| Data storage | Good at storing large amounts of data inexpensively | Small amounts of data can be stored on-chain at significant cost |
| Proof | Blockchain secured using proof-of-replication and proof-of-spacetime | Blockchain secured using proof-of-work |
| Consensus power | Miners with the most storage have the most power | Miners with the most computational speed have the most power |
| Mining hardware | Hard drives, GPUs, and CPUs | ASICs |
| Mining usefulness | Mining results in people’s files being stored | Mining results in heat |
| Types of provider | Storage provider, retrieval provider, repair provider | All providers perform proof-of-work |
| Uptime requirements | Storage providers rewarded for uptime, penalized for downtime | Miners can go offline without being penalized |
| Network status | Mainnet running since 2020 | Mainnet running since 2009 |
Answers to your frequently asked questions on everything from Filecoin’s crypto-economics and storage expenses to hardware and networking.
Filecoin is a protocol that provides core primitives, enabling a truly trustless decentralized storage network. These primitives and features include publicly verifiable cryptographic storage proofs, cryptoeconomic mechanisms, and a public blockchain. Filecoin provides these primitives to solve the really hard problem of creating a trustless decentralized storage network.
On top of the core Filecoin protocol, there are a number of layer 2 solutions that enable a broad array of use cases and applications, many of which also use IPFS, such as Lighthouse or Tableland. Using these solutions, any use case that can be built on top of IPFS can also be built on Filecoin!
Some of the primary areas for development on Filecoin are:
Additional developer tools and layer-2 solutions and libraries that strengthen Filecoin as a developer platform and ecosystem.
IPFS apps that rely on decentralized storage solutions and want a decentralized data persistence solution as well.
Financial tools and services on Filecoin, like wallets, signing libraries, and more.
Applications that use Filecoin’s publicly verifiable cryptographic proofs in order to provide trustless and timestamped guarantees of storage to their users.
Most websites and apps make money by displaying ads. This type of income model could be replaced with a Filecoin incentivized retrieval setup, where users pay small amounts of FIL for whatever files they’re hoping to download. Several large datasets are hosted through Amazon’s pay-per-download S3 buckets, which Filecoin retrieval could also easily augment or replace.
It’s going to require a major shift in how we think about the internet. At the same time, it is a very exciting shift, and things are slowly heading that way. Browser vendors like Brave, Opera, and Firefox are investing in decentralized infrastructure.
We think that the internet must return to its decentralized roots to be resilient, robust, and efficient enough for the challenges of the next several decades. Early developers in the Filecoin ecosystem are those who believe in that same vision and potential for the internet, and we’re excited to work with them to build this space.
We are still finalizing our cryptoeconomic parameters, and they will continue to evolve.
Here is a blog about Filecoin economics from December 2020: Filecoin network economics.
As Filecoin is a free market, the price will be determined by a number of variables related to the supply and demand for storage. It’s difficult to predict before launch. However, a few design elements of the network help support inexpensive storage.
Along with revenue from active storage deals, Storage Miners receive block rewards, where the expected value of winning a given block reward is proportional to the amount of storage they have on the network. These block rewards are weighted heavily towards the early days of the network (with the frequency of block rewards exponentially decaying over time). As a result, Storage Miners are relatively incentivized to charge less for storage to win more deals, which would increase their expected block reward.
Further, Filecoin introduces a concept called Verified Clients, where clients can be verified to actually be storing useful data. Storage Miners who store data from Verified Clients also increase their expected block reward. Anyone running a Filecoin-backed IPFS pinning service should qualify as a Verified Client. We do not have the process of verification finalized, but we expect it to be similar to submitting a GitHub profile.
Filecoin creates a hyper-competitive market for data storage. There will be many storage providers offering many prices, rather than one fixed price on the network. We expect Filecoin’s permissionless model and low barriers to entry to result in some very efficient operations and low-priced storage, but it’s impossible to say what exact prices will be until the network is live.
IPFS will continue to exist as it is, enhanced with Filecoin nodes. There are many use cases that require no financial incentive. Think of it this way: IPFS is like HTTP, and Filecoin is like a storage cloud such as S3 – only a fraction of IPFS content will be stored there.
People with unused storage who want to earn monetary rewards should pledge that storage to Filecoin, and clients who want guaranteed storage should store that data with Filecoin storage providers.
Lotus is the primary reference implementation for the Filecoin protocol. At this stage, we would recommend most storage providers use Lotus to participate in the Filecoin network.
While the Filecoin team does not recommend a specific hardware configuration, we document various setups here. Additionally, this guide to storage mining details hardware considerations and setups for storage providers. However, it is likely that there are more efficient setups, and we strongly encourage storage providers to test and experiment to find the best combinations.
For information on Lotus requirements, see Prerequisites > Minimal requirements.
For information on Lotus full nodes and lite nodes, see Types of nodes.
There are a number of details that are still being finalized between the verified deals construction and the associated cryptoeconomic parameters.
Our aim is to allow these details to finalize before shipping, but given timelines, we’re considering enabling teams to take receipt of these drives before the parameters are set. We will publish updates on the status of the Discover project on the Filecoin blog.
For mainnet, you will need a public IP address, but it doesn’t need to be fixed (just accessible).
If you lost the data itself, then no, there’s no way to recover that, and you will be slashed for it. If the data itself is recoverable, though (say you just missed a WindowPoSt), then the Recovery process will let you regain the sector.
SDR (Stacked DRG PoRep) is confirmed and used, and we have no evidence of malicious construction. The algorithm is also going through both internal and external security audits.
If you have any information about any potential security problem or malicious construction, reach out to our team at security@filecoin.org.
Native storage extension (NSE) is one of the best candidates for a proof upgrade, and teams are working on implementation. There are other promising candidates as well; it may be that another algorithm ends up better than NSE – we don’t know yet. Proof upgrades will arrive after the mainnet launch and will coexist.
AMD may be optimal hardware for SDR. You can see this description for more information on why.
In addition to Filecoin Discover, a number of groups are actively building tools and services to support the adoption of the Filecoin network with developers and clients. For example, check out the recordings from our Virtual Community Meetup to see updates about Textile and Starling Storage. You can also read more about some of the teams building on Filecoin through HackFS in our HackFS Week 1 Recap.
There will be off-chain order books and storage provider marketplaces – some are in development now from some teams. They will work mostly off-chain because transactions per second on-chain are not enough for the volume of usage we expect on Filecoin. These order books build on the basic deal-flow on-chain. These order books will arrive in their own development trajectory – most likely around or soon after the mainnet launch.
Currently, Filecoin’s Proof of Replication (PoRep) performs best on AMD processors. See this description of Filecoin sealing for more information. More precisely, it runs much slower on Intel CPUs. It runs competitively fast on some ARM processors, like the ones in newer Samsung phones, but they lack the RAM to seal the larger sector sizes. The main reason we see this benefit on AMD processors is their implementation of the SHA hardware instructions.
Storage providers will publish storage deals that they will upgrade the CC sector with, announce to the chain that they are doing an upgrade, and prove to the chain that a new sector has been sealed correctly. We expect to evolve and make this cheaper and more attractive over time after the mainnet launch.
When a committed capacity sector is added to the chain, it can upgrade to a sector with deals, extend its lifetime, or terminate through either faults or voluntary actions. While we don’t expect this to happen very often on mainnet, a storage provider may deem it rational to terminate their promise to the network and their clients, and accept a penalty for doing so.
For the first iteration of the protocol, yes. We have plans to make it cheaper and more economically attractive after mainnet with no resealing required and other perks.
The minimum duration for a deal is set in the storage provider’s ask. There’s also a practical limitation because sectors have a minimum duration (currently 180 days).
Automatic repair of faulted data is a feature we’ve pushed off until after the mainnet launch. For now, the way to ensure resiliency is to store your data with multiple storage providers, to gain some level of redundancy. If you want to learn more about how we are thinking about repair in the future, here are some notes.
To avoid extortion, always ensure you store your data with a fairly decentralized set of storage providers (and note: it’s pretty difficult for a storage provider to be sure they are the only person storing a particular piece of data, especially if you encrypt the data).
Storage providers currently provide a ‘dumb box’ interface and will serve anyone any data they have. Maybe in the future, storage providers will offer access control lists (ACLs) and logins and such, but that requires that you trust the storage provider. The recommended (and safest) approach here is to encrypt data you don’t want others to see yourself before storing it.
We have some really good ideas around ‘warm’ storage (that is mutable and provable) that we will probably implement in the near future. But for now, your app will have to treat Filecoin as an append-only log. If you want to change your data, you just write new data.
‘Warm’ storage can be done with a small amount of trust, where you make a deal with a storage provider with a start date quite far in the future. The storage provider can choose to store your data in a sector now (but they won’t get paid for proving it until the actual start date), or they can hold it for you (and even send you proofs of it on request), and you can then send them new data to overwrite it, along with a new storage deal that overwrites the previous one.
There’s a pretty large design space here, and we can do a bunch of different things depending on the levels of trust involved, the price sensitivity, and the frequency of updates clients desire.
Allocators, selected through an application process, serve as fiduciaries for the Filecoin network and are responsible for allocating DataCap to clients with valuable storage use cases.
See Filecoin Plus.
No – Filecoin creates a decentralized storage network in part by massively decreasing the barrier to entry to becoming a storage provider. Even if there were some large pools, anyone can join the network and provide storage with just a modest hardware purchase, and we expect clients to store their files with many diverse storage providers.
Also, note that world location matters for mining: many clients will prefer storage providers in specific regions of the world, so this enables lots of storage providers to succeed across the world, where there is storage demand.
If you are retrieving your data from IPFS or a remote pinning layer, retrieval should take on the order of milliseconds to seconds in the worst case. Our latest tests for retrieval from the Filecoin network directly show that a sealed sector holding data takes ~1 hour to unseal. 1-5 hours is our best real-world estimate to go from sector unsealing to delivery of the data. If you need faster data retrieval for your application, we recommend building on IPFS.
Connect with the Filecoin community in discussion forums or on IRC. The Filecoin community is active and here to answer your questions in your channel of choice.
For shorter-lived discussions, our community chat is open to all on both Slack and Matrix, with bridged channels allowing you to participate in the same conversations from either platform:
For long-lived discussions and for support, please use the discussion tab on GitHub instead of Slack or Matrix. It’s easy for complex discussions to get lost in a sea of new messages on those chat platforms, and posting longer discussions and support requests on the forums helps future visitors, too.
This page will help you understand how to plan a profitable business, design a suitable storage provider architecture, and make the right hardware investments.
The Filecoin network provides decentralized data storage and makes sure data is verified, always available, and immutable. Storage providers in the Filecoin network are in charge of storing data, providing content, and issuing new blocks.
To become a storage provider in the Filecoin network, you need a range of technical, financial, and business skills. We will explain all the key concepts you need to understand in order to design a suitable architecture, make the right hardware investments, and run a profitable storage provider business.
Follow these steps to begin your storage provider journey:
1. Understand Filecoin economics
2. Plan your business
3. Make sure you have the right skills
4. Build the right infrastructure
To understand how you can run a profitable business as a Filecoin storage provider, it is important to make sure you understand the economics of Filecoin. Once you understand all core concepts, you can build out a strategy for your desired ROI.
Storage providers can also add additional value to clients when they offer certain certifications. These can enable a storage provider to charge customers additional fees for storing data in compliance with those standards, for example, HIPAA, SOC 2, PCI, GDPR, and others.
The hardware and other requirements for running a Filecoin storage provider business are significantly higher than regular blockchain mining operations. The mechanisms are designed this way because, in contrast to some other blockchain solutions, where you can simply configure one or more nodes to “mine” tokens, the Filecoin network’s primary goal is to provide decentralized storage for humanity’s most valuable data.
You need to understand the various earning mechanisms in the Filecoin network.
As will become clear, running a storage operation is a serious business, with client data and pledged funds at stake. You will be required to run a highly-available service, and there are automatic financial penalties if you cannot demonstrate data availability to the network. There are many things that can go wrong in a data center, on your network, on your OS, or at an application level.
You will need skilled people to operate your storage provider business. Depending on the size and complexity of your setup this can be 1 person with skills across many different domains, or multiple dedicated people or teams.
At the lowest level, you will need datacenter infrastructure. You need people capable of architecting, racking, wiring, and operating infrastructure components. Alternatively, you can have your infrastructure colocated, or even consume it entirely as a service from a datacenter provider.
Take availability and suitable redundancy into consideration when choosing your datacenter or colocation provider. Any unavailability of your servers, network, or storage can result in automatic financial penalties on the Filecoin network.
This section discusses the economics of Filecoin in relation to storage providers.
This page discusses the concept of collateral in Filecoin for storage providers.
As a storage provider on the network, you will have to create FIL wallets and add FIL to them. This FIL is used to send messages to the blockchain, but it is also used as collateral. Providing storage capacity to the network requires you to provide FIL as collateral, which goes into a locked wallet on your Lotus instance. The Lotus documentation details the process of setting up and funding your wallets for the initial setup. Filecoin uses upfront token collateral, as in proof-of-stake protocols, proportional to the storage hardware committed. This gets the best of both worlds to protect the network: attacking the network requires both acquiring and running the hardware, but it also requires acquiring large quantities of the token.
To satisfy the varied collateral needs of storage providers in a minimally burdensome way, Filecoin includes three different collateral mechanisms:
Initial pledge collateral, an initial commitment of FIL that a miner must provide with each sector.
Block rewards as collateral, a mechanism to reduce the initial token commitment by vesting block rewards over time.
Storage deal provider collateral, which aligns incentives between storage provider and client and can allow storage providers to differentiate themselves in the market.
For more detailed information about how collateral requirements are calculated, see the miner collateral section in the Filecoin spec.
When a storage provider fails to answer the WindowPoSt challenges within the 30-minute deadline (see Storage Proving), takes storage offline, or breaks any storage deal rules, the provider is penalized against the provided collateral. This penalty is called slashing: a portion of the pledged collateral is forfeited from your locked or available rewards to the f099 address, and your storage power is reduced. The f099 address is the address where all burned FIL goes.
The amount of required collateral depends on the amount of storage pledged to the Filecoin network. The larger the volume you store, the more collateral is required. Additionally, Filecoin Plus uses a QAP multiplier to increase the collateral requirement. See Verified Deals with Filecoin Plus for more information.
The formula for the required collateral is as follows:
Collateral needed for X TiB = (Current Sector Initial Pledge) × 32 × (X TiB)
For instance, for 100 TiB at 0.20 FIL per 32 GiB sector, this means:
0.20 FIL × 32 × 100 = 640 FIL
The “Current Sector Initial Pledge” value can be found on blockchain explorers like Filfox and on the Starboard dashboards.
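The collateral formula can be sketched in code. This is a minimal illustration, not an official calculator; the current sector initial pledge must be looked up from a chain explorer:

```python
SECTORS_PER_TIB = 32  # 1 TiB of capacity corresponds to 32 sectors of 32 GiB

def collateral_needed(sector_initial_pledge_fil: float, capacity_tib: float) -> float:
    """Estimate the FIL collateral required to pledge `capacity_tib` TiB,
    given the current initial pledge per 32 GiB sector."""
    return sector_initial_pledge_fil * SECTORS_PER_TIB * capacity_tib

# For instance, 100 TiB at 0.20 FIL per 32 GiB sector:
print(collateral_needed(0.20, 100))  # 640.0
```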
Another cost factor in the network is gas. Storage providers do not only pledge collateral for the capacity they announce on-chain; the network also burns FIL in the form of gas fees. Most activity on-chain has some level of gas involved. For storage providers, this is the case when committing sectors.
The gas fees fluctuate over time and can be followed on various websites like Filfox - Gas Statistics and Beryx - Gas Estimator.
The ecosystem does have FIL Lenders who can provide you FIL (with interest) to get you started, which you can pay back over time and with the help of earned block rewards. Every lender, though, will still require you to supply up to 20% of the required collateral. The Filecoin Virtual Machine, introduced in March 2023, enables the creation of new lending mechanisms via smart contracts.
Storage proving, known as Proof-of-Spacetime (“PoSt”), is the mechanism that the Filecoin blockchain uses to validate that storage providers are continuously providing the storage they claim. Storage providers earn block rewards each time they successfully answer a PoSt challenge.
As a storage provider, you must preserve the data for the duration of the deal, an on-chain agreement between a client and a storage provider. As of March 2023, deals must have a minimum duration of 180 days and a maximum duration of 540 days. The latter value was chosen to balance long deal length with cryptographic security. Storage providers must be able to continuously prove the availability and integrity of the data they are storing. Every storage sector of 32 GiB or 64 GiB gets verified once in each 24-hour period. This period is called a proving period. Every proving period of 24 hours is broken down into a series of 30-minute, non-overlapping deadlines. This means there are 48 deadlines per day. Storage sectors are grouped in a partition and assigned to a proving deadline. All storage sectors in a given partition will always be verified during the same deadline.
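The proving-period arithmetic above can be sketched as follows (a minimal illustration, not Lotus code):

```python
from datetime import timedelta

PROVING_PERIOD = timedelta(hours=24)     # each sector is verified once per proving period
DEADLINE_WINDOW = timedelta(minutes=30)  # non-overlapping WindowPoSt deadlines

# 24 hours divided into 30-minute windows gives 48 deadlines per day.
deadlines_per_day = PROVING_PERIOD // DEADLINE_WINDOW
print(deadlines_per_day)  # 48
```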
The cryptographic challenge for storage proving is called Window Proof-of-Spacetime (WindowPoSt). Storage providers have a deadline of 30 minutes to respond to this WindowPoSt challenge via a message on the blockchain containing a zk-SNARK proof of the verified sector. Failure to submit this proof within the 30-minute deadline, or failure to submit it at all, results in slashing. Slashing means a portion of the collateral will be forfeited to the f099 burn address and the storage power of the storage provider gets reduced. Slashing is a way to penalize storage providers who fail to meet the agreed-upon standards of storage.
Filecoin is everywhere on the internet — and that includes social media. Find your favorite flavor here.
The Filecoin YouTube channel is home to a wealth of information about the Filecoin project — everything from developer demos to recordings of mining community calls — so you can explore playlists and subscribe to ones that interest and inform you.
Explore the latest news, events and other happenings on the official Filecoin Blog.
Subscribe to the Filecoin newsletter for official project updates sent straight to your inbox.
Get your Filecoin news in tweet-sized bites. Follow these accounts for the latest:
@Filecoin
for news and other updates from the Filecoin project
@ProtoSchool
for updates on ProtoSchool workshops and tutorials
Follow FilecoinOfficial on WeChat for project updates and announcements in Chinese.
This page is a quick start guide for storage providers in the Filecoin ecosystem.
Get ready to dive into the valuable resources of the storage provider documentation. This comprehensive guide offers a wealth of information about the role of storage providers in the Filecoin ecosystem, including insights into the economic aspects. You’ll also gain knowledge about the software architecture, hardware infrastructure, and the necessary skills for success.
To run a successful storage provider business, it’s crucial to understand the concept of Return on Investment (ROI) and the significance of collateral. By planning ahead and considering various factors, such as CAPEX, OPEX, network variables, and collateral requirements, you can make informed decisions that impact your business’s profitability and desired capacity.
One of the truly enriching elements of the Filecoin ecosystem lies in its vibrant community. Meet the community on the Filecoin Slack. Within this dynamic network, you’ll find a treasure trove of individuals who are eager to share their experiences and offer invaluable solutions to the challenges they’ve encountered along the way. Whether it’s navigating the intricacies of storage provider operations or overcoming hurdles on the blockchain, this supportive community stands ready to lend a helping hand. Embrace the spirit of collaboration and tap into this remarkable network.
Get ready to dive into the heart of the Filecoin network with Lotus, the leading reference implementation. As the most widely used software stack for interacting with the blockchain and operating a storage provider setup, Lotus holds the key to unlocking a world of possibilities. Seamlessly navigate the intricacies of this powerful tool and leverage its capabilities to propel your journey forward.
It’s time to roll up your sleeves and embark on a hands-on adventure. With a multitude of options at your disposal, setting up a local devnet environment is the easiest and most exciting way to kickstart your Filecoin journey. Immerse yourself in the captivating world of sealing sectors and witness firsthand how this critical process works. Feel the thrill of experimentation as you delve deeper into the inner workings of this remarkable technology.
Congratulations on taking the next leap in becoming a full-fledged storage provider! Now is the time to determine your starting capacity and architect a tailored solution to accommodate it. Equip yourself with the necessary hardware to kickstart your journey on the mainnet. Test your setup on the calibration testnet to fine-tune your skills and ensure seamless operations. Once you’re ready, brace yourself for the excitement of joining the mainnet.
As you step into the vibrant realm of the mainnet, it’s time to supercharge your storage provider capabilities with Boost. Discover the immense potential of this powerful software designed to help you secure storage deals and offer efficient data retrieval services to data owners. Unleash the full force of Boost and witness the transformative impact it has on your Filecoin journey.
Within the Filecoin network there are many programs and tools designed to enhance your storage provider setup. Uncover the power of these tools as you dive into the documentation, gaining valuable insights and expanding your knowledge. Make the best use of data programs on your path to success.
This page describes block rewards in Filecoin, where storage providers are elected to produce new blocks and earn FIL as rewards.
WinningPoSt (short for Winning Proof of SpaceTime) is the cryptographic challenge through which storage providers are rewarded for their contributions to the network. At the beginning of each epoch (1 epoch = 30 seconds), a small number of storage providers are elected by the network to mine new blocks. Each elected storage provider who successfully creates a block is granted Filecoin tokens by means of a block reward. The amount of FIL per block reward varies over time and is listed on various blockchain explorers like Filfox.
The election mechanism of the Filecoin network is based on the “storage power” of the storage providers. A minimum of 10 TiB in storage power is required to be eligible for WinningPoSt, and hence to earn block rewards. The more storage power a storage provider has, the more likely they will be elected to mine a block. This concept becomes incredibly advantageous in the context of Filecoin Plus verified deals.
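As a rough illustration, expected election frequency scales with a provider's share of total network power. The sketch below is a simplification (the real election is randomized each epoch), with only the 10 TiB eligibility threshold taken from the text:

```python
MIN_POWER_TIB = 10.0  # minimum storage power to be eligible for WinningPoSt

def expected_win_share(provider_power_tib: float, network_power_tib: float) -> float:
    """Long-run expected share of block elections: proportional to the
    provider's share of total network power; zero below the threshold."""
    if provider_power_tib < MIN_POWER_TIB:
        return 0.0
    return provider_power_tib / network_power_tib
```

For example, a provider holding 100 TiB out of a 10,000 TiB network would expect to win roughly 1% of block elections over the long run.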
The Filecoin network is composed of storage providers who offer storage capacity to the network. This capacity is used to secure the network, as it takes a significant amount of storage to take part in the consensus mechanism. This large capacity makes it impractical for a single party to reach 51% of the network power, since an attacker would need 10 EiB in storage to control the network. Therefore, it is important that the raw capacity, also referred to as raw byte power, remains high. The Filecoin spec also includes a baseline power above which the network yields maximum returns for storage providers.
The graph below shows the evolution of network capacity on the Filecoin network. As can be seen, the baseline power rises over time (and becomes exponential). From May 2021 to February 2023, the raw byte power was above the baseline, so the network yielded maximum returns for storage providers. More recently, however, Quality Adjusted Power (QAP) has taken over as the leading indicator of relevance for the Filecoin network. QAP is the result of the quality multiplier applied when storing verified deals: a sector filled with Filecoin Plus verified deals counts for 10 times its raw byte size.
Check out the Starboard dashboard for the most up-to-date Network Storage Capacity.
As mentioned before, when the Raw Byte Power is above the Baseline Power, storage providers yield maximum returns. When building a business plan as a storage provider, it is important not to rely solely on block rewards. Block rewards are an incentive mechanism for storage providers. However, they are volatile and depend on the state of the network, which is largely beyond the control of storage providers.
The amount of FIL that is flowing to the storage provider per earned block reward is based on a combination of simple minting and baseline minting. Simple minting is the minimum amount of FIL any block will always have, which is 5.5. Baseline minting is the extra FIL on top of the 5.5 that comes from how close the Raw Byte Power is to the Baseline Power.
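In simplified form (the actual baseline-minting formula is more involved; see the Filecoin spec), the reward per block can be modeled as:

```python
SIMPLE_MINTING_FIL = 5.5  # minimum amount of FIL every block reward contains

def block_reward_fil(baseline_minting_fil: float) -> float:
    """Simplified model: total block reward = simple minting (fixed floor)
    plus baseline minting, which grows as Raw Byte Power approaches the
    Baseline Power. baseline_minting_fil is a hypothetical input here."""
    return SIMPLE_MINTING_FIL + baseline_minting_fil
```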
The below graph shows the evolution of FIL per block reward over time:
There is also a positive side to releasing less FIL per block reward. As Filecoin has a capped maximum token supply of 2 billion FIL, a slower minting rate allows minting over a longer period. A lower circulating supply can also have a positive effect on the price of FIL.
See the Crypto Economics page of this documentation and the Filecoin spec for more information.
This section covers the different types of deals in the Filecoin network, and how they relate to storage providers.
This page discusses what storage deals are, and how storage providers can prepare for them.
The real purpose of Filecoin is to store humanity's most important information. As a storage provider, that means accepting storage deals and storing deal sectors with real data in them. As before, those sectors are either 32 GiB or 64 GiB in size and require that the data be prepared as a content archive; that is, as a CAR file.
Data preparation, which includes packaging files into size appropriate CAR files, is either done by a separate Data Preparer actor, or by storage providers acting as Data Preparers. The latter option is common for new storage providers, as they normally only have a few files that need preparation.
Data preparation can be done in various ways, depending on your use-case. Here are some valuable sources of information:
See the following video for a demonstration of Singularity:
The storage provider can (and should) keep unsealed data copies available for retrieval requests from the client. It is the same software component, Boost, that is responsible for HTTP retrievals from the client and for setting the price for retrievals.
Slashing penalizes storage providers that either fail to provide reliable uptime or act maliciously against the network. This page discusses what slashing means to storage providers.
This term encompasses a broad set of penalties which are to be paid by storage providers if they fail to provide sector reliability or decide to voluntarily exit the network. These include:
Fault fees are incurred for each day a storage provider’s sector is offline (fails to submit Proofs-of-Spacetime to the chain). Fault fees continue until the associated wallet is empty and the storage provider is removed from the network. In the case of a faulted sector, there will be an additional sector penalty added immediately following the fault fee.
Sector penalties are incurred for a faulted sector that was not declared faulted before a WindowPoSt check occurs. Once the fault is detected, the sector pays a sector penalty in addition to the fault fee.
Termination fees are incurred when a sector is voluntarily or involuntarily terminated and is removed from the network.
This penalty is incurred when committing consensus faults. This penalty is applied to storage providers that have acted maliciously against the network’s consensus functionality.
This page covers the various programs and services that storage providers can take part in.
Web3.storage runs on “Elastic IPFS” as the inbound storage protocol, offering scalability, performance, and reliability as the platform grows. It guarantees the user (typically developers) that the platform will always serve your data when you need it. In the backend, the data is uploaded to the Filecoin network for long-term storage.
Spade automates the process of renewing storage deals on the Filecoin network, ensuring the longevity of data stored on the blockchain. This is particularly useful for datasets that need to be preserved for extended periods, far beyond the standard deal duration. By using Spade, organizations and individuals can manage and maintain their data storage deals more efficiently, guaranteeing that valuable data remains accessible and secure over time.
Many other programs and tools exist in the Filecoin community, developed by partners or storage providers. We list some examples below.
Swan is a provider of cross-chain cloud computing solutions. Developers can use its suite of tools to access resources across multiple chains.
Swan Cloud provides decentralized cloud computing solutions for Web3 projects by integrating storage, computing, and payment into one suite.
Open Panda was a platform for data researchers, analysts, students, and enthusiasts to interact with the largest open datasets in the world. Data available through the platform was stored on Filecoin, a decentralized storage network comprised of thousands of independent Storage Providers around the world.
Here is a comprehensive list of deprecated tools and projects.
CO2.Storage was a decentralized storage solution for structured data based on content-addressed data schemas. CO2.Storage primarily focused on structured data for environmental assets, such as Renewable Energy Credits, Carbon Offsets, and geospatial datasets, and mapped inputs to base data schemas (IPLD DAGs) for off-chain data (like metadata, images, attestation documents, and other assets) to promote the development of standard data schemas for environmental assets. The project was in alpha; while many features could be considered stable, a full launch was deferred until the project was feature complete. The Filecoin Green team actively worked on this project and welcomed contributions from the community.
Filecoin Tracker was deprecated on April 20, 2024.
Here are great existing and working Filecoin dashboards that cover similar topics:
dataprograms.org listed tools, products, and incentive programs designed to drive growth and make data storage on Filecoin more accessible. It was discontinued in April 2024.
Moon Landing was designed to ramp up storage providers in the Filecoin network by enabling them to serve verified deals at scale.
Filecoin Dataset Explorer showcased data stored on the Filecoin network between 2020 and 2022, including telemetry, historical archives, Creative Commons media, entertainment archives, scientific research, and machine learning datasets. It highlighted Filecoin's capability to store large datasets redundantly, ensuring availability from multiple Storage Providers worldwide. Each dataset was identified by a unique content identifier (CID). The platform aimed to make diverse datasets accessible to users globally.
See also: Legacy Explorer (legacy.datasets.filecoin.io)
Estuary was an experimental software platform designed for sending public data to the Filecoin network, facilitating data retrieval from anywhere. It integrated IPFS and Filecoin technologies to provide a seamless end-to-end example for data storage and retrieval. When a file was uploaded, Estuary immediately made multiple storage deals with different providers to ensure redundancy and security. The software automated many aspects of deal making and retrieval, offering tools for managing connections, block storage, and deal tracking. Estuary aimed to simplify the use of decentralized storage networks for developers and users.
Estuary was discontinued in July 2023, and the website shut down in April 2024.
Big Data Exchange was a program that allowed storage providers easy access to Filecoin+ deals through an auction where Storage Providers could bid on datasets by offering to pay clients FIL to choose the bidder as their Storage Provider.
The has a set of CLI tools for more specific use-cases.
is a command-line tool to put data into CAR files, create , and even initiate deals with storage providers.
In order for storage providers to accept deals and set their deal terms, they need to install some market software, such as . This component interacts with data owners, accepts deals if they meet the configured requirements, gets a copy of the prepared data (CAR files), and puts it through the , after which it is in the state required to be proven to the network.
Many tools and platforms act as a deal making engine in front of Boost. This is the case for for instance.
One way of participating in the Filecoin network is by providing to the network. CC sectors do not contain customer data but are filled with random data when they are created. The goal for the Filecoin network is to have a distributed network of verifiers and collaborators in order to run and maintain a healthy blockchain. Any public blockchain requires enough participants in its consensus mechanism to guarantee that transactions being logged onto the blockchain are legitimate. Because Filecoin’s consensus mechanism is based on Proof-of-Storage, the network needs sufficient storage providers that pledge capacity, and thus take part in the consensus process. This is done via Committed Capacity sectors, in sizes of 32 GiB or 64 GiB. For more detail, see the .
Because the Filecoin network needs consistency, meaning all data stored is still available and unaltered, a storage provider is required to keep their capacity online, and be able to demonstrate to the network that the capacity is online. WindowPoSt verification is the process that checks that the provided capacity remains online. If not, a storage provider is penalized (or slashed) over the collateral FIL they provided for that capacity and their storage power gets reduced. This means an immediate reduction in capital (lost FIL), but also a reduction in future earnings because block rewards are correlated to storage power, as explained above. See , and for more information.
Providing committed capacity is the easiest way to get started as a storage provider, but the economics are very dependent on the price of FIL. If the price of FIL is low, it can be unprofitable to provide only committed capacity. The optimal FIL-price your business needs to be profitable will depend on your setup. Profitability can be increased by utilizing , along with .
Although it is possible to find your own data storage customers with valuable datasets they want to store, and have them verified through KYC () to create verified deals for , there are also programs and platforms that make it easier for storage providers to receive verified deals.
Filecoin Green aims to measure the environmental impacts of Filecoin and verifiably drive them below zero, building infrastructure along the way that allows anyone to make transparent and substantive environmental claims. The team maintains the and works with storage providers to decarbonize their operations through the . Connect with the team on Slack at , or via email at .
Singularity is an end-to-end solution for onboarding datasets to Filecoin storage providers, supporting . It offers modular compatibility with various data preparation and deal-making tools, allowing efficient processing from local or remote storage. Singularity integrates with over 40 storage solutions and introduces inline preparation, which links CAR files to their original data sources, preserving dataset hierarchies. It also supports content distribution and retrieval through multiple protocols and provides push and pull modes for deal making along with robust wallet management features.
Our Sync & Share platform, a.k.a. DataDrop, empowered by Filecoin’s decentralized storage, transforms data management, creating paid deals for Storage Providers and bringing more utility to the Filecoin ecosystem. It integrates with deal engines in the back-end such as .
CIDGravity is a software-as-a-service that allows storage providers to handle dynamic pricing and client management towards your solution. It integrates with deal engines such as .
Evergreen extended the program by aiming to store open datasets forever. Standard deals had a maximum duration of 540 days, which was not long enough for valuable, open datasets that might need to be stored forever. Evergreen used the deal engine, which automatically renewed deals to extend the lifetime of the dataset on-chain.
Slingshot was a program that united data clients, data preparers, and storage providers in a community to onboard data and share replicas of publicly valuable . Rather than providing a web interface like Estuary, Slingshot provided a workflow and tools for onboarding large open datasets. The Slingshot Deal Engine provided deals to registered and certified storage providers. The data was prepared and uploaded using a tool called .
Snap Deals are a way to convert Committed Capacity sectors (that store no real data) into data sectors to be used for storing actual data and potentially Filecoin Plus data.
Instead of destroying a previously sealed sector and recreating a new sector that needs to be sealed, Snap Deals allow data to be ingested into CC-sectors without the requirement of re-sealing the sector.
There are two main reasons why a storage provider could be doing Snap Deals, also known as “snapping up their sectors” in the Filecoin community:
The first reason is that the 10x storage power on the same volume of data stored is a strong incentive to upgrade to verified deals for those storage providers who started out on CC-sectors and wish to upgrade to verified deals with Filecoin Plus.
The second reason applies to storage providers who start out sealing CC-sectors but later fill them with verified deals. When you start as a storage provider, or when you expand your storage capacity, it can be a good idea to fill your capacity with CC-sectors in the absence of verified deals. Not only do you start earning block rewards over that capacity but, more importantly, you can plan your sealing throughput and balance the load over the available hardware. If your sealing rate is 3 TiB/day, it makes no sense to feed 5 TiB/day into the pipeline; this creates congestion and possibly degraded performance. If you instead seal 3 TiB/day for 33 days in a row, you end up with 99 TiB of sectors sealed evenly and consistently. If you then take on a 99 TiB verified deal (accounting for nearly 1 PiB QAP), the only thing required is to snap up the sectors.
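The planning arithmetic in this example can be reproduced with a short sketch (hypothetical helpers, assuming the 10x quality multiplier for verified deals; 990 TiB is just under the 1 PiB the text rounds to):

```python
QAP_MULTIPLIER_VERIFIED = 10  # Filecoin Plus verified deals count 10x toward power

def sealed_tib(rate_tib_per_day: float, days: int) -> float:
    """Raw capacity sealed at a steady daily rate."""
    return rate_tib_per_day * days

def qap_tib(raw_tib: float, verified: bool) -> float:
    """Quality Adjusted Power for a given raw capacity."""
    return raw_tib * (QAP_MULTIPLIER_VERIFIED if verified else 1)

raw = sealed_tib(3, 33)      # 99 TiB sealed over 33 days
power = qap_tib(raw, True)   # 990 TiB QAP after snapping up verified deals
```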
Snapping up sectors with snap deals puts a lot less stress on the storage provider’s infrastructure. The only task that is executed from the sealing pipeline is the replica-update and prove-replica-update phase, which is similar to the PC2 process. The CPU-intensive PreCommit 1 phase is not required in this process.
Do not forget to provide the collateral funds when snapping up a verified deal. The same volume requires more collateral when it counts as Filecoin Plus data, namely 10x the collateral compared to raw storage power.
As a storage provider, you can set your business apart from the rest by offering additional services to your customers. Many new use-cases for the Filecoin network are emerging as new technologies are developed.
One of these additional services is participation in Saturn retrieval markets. Saturn is a Web3 CDN (“content delivery network”) launching in stages in 2023. Saturn aims to be the biggest Web3 CDN, and the biggest CDN overall. With the introduction of Saturn, data stored on Filecoin is no longer limited to archive or cold storage; it can also be cached in a CDN layer for fast retrieval. Data that needs to be available quickly can be stored on Filecoin and retrieved through Saturn. Saturn comes with two layers of caching, L1 and L2. L1 nodes typically run in data centers and require high availability and 10 Gbps minimum connectivity. L1 Saturn providers earn FIL by caching and serving data to clients. L2 nodes can be run via an app on desktop hardware.
Other new opportunities are emerging since the launch of the FVM (Filecoin Virtual Machine) in March 2023. The FVM allows smart contracts to be executed on the Filecoin blockchain. The FVM is Ethereum-compatible (also called the FEVM) and allows entirely new use cases to be developed in the Filecoin ecosystem. On-chain FIL lending is one example, but the opportunities are countless.
A next step after the introduction of the FVM is Bacalhau, which will offer Compute over Data (COD). After the introduction of a compute layer on Filecoin, Bacalhau’s COD promises to run compute jobs where the data resides, at the storage provider. Today, data scientists have to transfer their datasets to compute farms in order to run their AI, ML, or other data-processing activities. Bacalhau will allow them to run compute on the data where it is located, removing the expensive requirement to move data around. Storage providers will be able to offer, and get rewarded for, providing compute power to data scientists and other parties who want to execute COD.
Another potential service to offer is storage tiers with various performance profiles. For example, storage providers can offer hot/online storage by keeping an additional copy of the unsealed data available for immediate retrieval, alongside the sealed copy that has been stored on the Filecoin network.
This section covers the architectural components and processes that storage providers should be aware of when creating their infrastructure.
This page covers how storage providers can charge for data on the Filecoin network.
Charging for data stored on your storage provider network is an essential aspect of running a sustainable business. While block rewards from the network can provide a source of income, they are highly dependent on the volatility of the price of FIL, and cannot be relied on as the sole revenue stream.
To build a successful business, it is crucial to develop a pricing strategy that is competitive, yet profitable. This will help you attract and retain customers, as well as ensure that your business succeeds in the long term. While some programs may require storage providers to accept deals for free, or bid in auctions to get a deal, it is generally advisable to charge customers for most client deals.
When developing your pricing strategy, it is important to consider the cost of sales associated with acquiring new customers. This cost consideration should include expenses related to business development, marketing, and sales, which you should incorporate into your business’ return-on-investment (ROI) calculation.
In addition to sales costs, other factors contribute to your business’ total cost of ownership. These include expenses related to backups of your setup and data, providing an access layer to ingest data and for retrievals, preparing the data when necessary, and more. Investigating these costs is essential to ensure your pricing is competitive, yet profitable.
By charging for data stored on your network, you can create a sustainable business model that allows you to invest in hardware and FIL as collateral, as well as grow your business over time. This requires skilled people capable of running a business at scale and interacting with investors, venture capitalists, and banks to secure the necessary funding for growth.
The rate at which storage providers complete the sealing pipeline process is called the sealing rate, or sealing capacity. This page describes considerations and advice in regards to sealing rate.
When setting up their business, storage providers must determine how fast they should seal and, thus, how much sealing hardware they should buy. In other words, the cost is an important factor in determining a storage provider’s sealing rate. For example, suppose you have an initial storage capacity of 100 TiB, which would account for 1 PiB QAP if all the sectors contain Filecoin Plus verified deals. If your sealing capacity is 2.5 TiB per day, you will seal your full 100 TiB in 40 days. Is it worth investing in double the sealing capacity to fill your storage in just 20 days? It might be if you are planning to grow way beyond 100 TiB. This is an example of the sort of cost considerations storage providers must factor in when tuning the sealing rate.
A common reason that a storage provider may want or need a faster sealing rate is customer expectations. When you take on a customer deal, there are often requirements to seal a dataset of a certain size within a certain time window. If you are a new storage provider with 2.5 TiB per day in sealing capacity, you cannot take on a deal of 2 PiB that needs to be on-chain in 1 month; at the very least, you could not take the deal using your own sealing infrastructure. Instead, you can use a Sealing-as-a-service provider, which can help you scale your sealing capabilities.
When designing their sealing pipeline, storage providers should consider bottlenecks, the grouping of similar tasks, and scaling out.
The art of building a well-balanced sealing pipeline means having the bottlenecks where you expect them to be; any non-trivial piece of infrastructure always contains some kind of bottleneck. Ideally, you should design your systems so that the PC1 process is the bottleneck. By doing this, all other components are matched to the capacity required to perform PC1. With PC1 being the most resource-intensive task in the pipeline, it makes the most sense to architect a solution around this bottleneck. Knowing exactly how much sealing capacity you can get from your PC1 servers is vital so you can match the rest of your infrastructure to this throughput.
Assuming you obtain maximum hardware utilization from your PC1 server to seal 15 sectors in parallel, and PC1 takes 3 hours on your infrastructure, that would mean a sealing rate of 3.75 TiB per day. The calculation is described below:
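The arithmetic behind the 3.75 TiB figure can be sketched as follows (assuming 32 GiB sectors and that a fresh batch of 15 sectors starts every 3 hours):

```python
SECTOR_GIB = 32      # sector size assumed in this example
GIB_PER_TIB = 1024

def sealing_rate_tib_per_day(parallel_sectors: int, pc1_hours: float) -> float:
    """Daily sealing throughput when PC1 is the pipeline bottleneck."""
    batches_per_day = 24 / pc1_hours            # 8 batches with a 3-hour PC1
    sectors_per_day = batches_per_day * parallel_sectors
    return sectors_per_day * SECTOR_GIB / GIB_PER_TIB

print(sealing_rate_tib_per_day(15, 3))  # 3.75 TiB per day
```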
While a Lotus worker can run all of the various tasks in the sealing pipeline, different storage provider configurations may split tasks between workers. Because some tasks are similar in behavior and others are insignificant in terms of resource consumption, it makes sense to group like-tasks together on the same worker.
A common grouping is AddPiece (AP) and PreCommit1 (PC1), because AP essentially prepares the data for the PC1 task. If you have dedicated hardware for PreCommit2 (PC2), your scratch content will move to that other server. If you group PC1 and PC2 on the same server, you avoid copying the sealing scratch, but you will need a larger NVMe volume; eventually, you may run out of sealing scratch space and be unable to start sealing additional sectors. Consider very high bandwidth (40 Gbps or even 100 Gbps) between the servers that copy over the sealing scratch.
As PC1 is CPU-bound and PC2 is GPU-bound, this is another good reason to separate those tasks into dedicated hardware, especially if you are planning to scale. Because PC2 is GPU-bound, it makes sense to have PC2, C1, and C2 collocated on the same worker.
Another rule of thumb is to have two PC2 workers for every PC1 worker in your setup. The WaitSeed phase occurs after PC2, which locks the scratch space for a sector until C1 and C2. In order to keep sealing sectors in PC1, PC2 must have sufficient capacity. You can easily host multiple PC2 workers on a single server, though, ideally with separate GPUs.
You can run multiple lotus-workers on the same GPU by splitting out their tmp folders: give each lotus-worker its own TMPDIR=<folder> environment variable.
A storage provider’s sealing capacity scales linearly with the hardware you add to it. For example, if your current setup allows for a sealing rate of 3 TiB per day, doubling the number of workers could bring you to 6 TiB per day. This requires that all components of your infrastructure are able to handle this additional throughput. Using Sealing-as-a-Service providers allows you to scale your sealing capacity without adding more hardware.
1-click deployment automation for the storage provider stack allows new storage providers to quickly learn and deploy Lotus and Boost.
It can be rather overwhelming for new storage providers to learn everything about Filecoin and the various software components. In order to help with the learning process, we provide a fully automated installation of the Lotus and Boost stack. This automation should allow anyone to go on mainnet or the Calibration testnet in no time.
This automation is still evolving and will receive more features and capabilities over time. In its current state, it lets you:
Install and configure Lotus Daemon to interact with the blockchain.
Initialize and configure Lotus Miner to join the network as a storage provider.
Install and configure Boost to accept storage deals from clients.
Install and configure Booster-HTTP to provide HTTP-based retrievals to clients.
The initial use case of this automation is to use sealing-as-a-service instead of doing your own sealing. As such, there is no Lotus Worker configured for the setup. It is possible to extend the setup with a remote worker. However, this Lotus Worker will require dedicated and custom hardware.
One of the next features coming to this automation is a composable deployment method. Today Lotus Daemon, Lotus Miner, and Boost are all installed on a single machine. Many production setups, however, will split out those services into their own dedicated hardware. A composable deployment will allow you to deploy singular components on separate servers.
Read the README
carefully on the GitHub repo to make sure you have all the required prerequisites in place.
This section covers various infrastructure considerations that storage providers should be aware of.
This page covers the potential return-on-investment (ROI) for storage providers (SPs) and how each SP can calculate their ROI.
Calculating the Return-on-Investment (ROI) of your storage provider business is essential to determine the profitability and sustainability of your operations. The ROI indicates the return or profit on your investment relative to the cost of that investment. There are several factors to consider when calculating the ROI of a storage provider business.
First, the cost of the initial hardware investment and the collateral in FIL required to participate in the network must be considered. These costs are significant and will likely require financing from investors, venture capitalists, or banks.
Second, the income generated from the block rewards must be factored into the ROI calculation. However, this income is subject to the volatility of the FIL token price, which can be highly unpredictable.
Third, it is important to consider the cost of sales when calculating the ROI. Sales costs include the cost of acquiring new customers, marketing, and any fees associated with payment processing. These costs can vary depending on the sales strategy and the size of the business.
Fourth, the total cost of ownership must be considered. This includes the cost of backups, providing access to ingest and retrieve data, preparing the data, and any other costs associated with operating a storage provider business.
Finally, the forecasted growth of the network and the demand for storage will also impact the ROI calculation. If the network and demand for storage grow rapidly, the ROI may increase. However, if the growth is slower than anticipated, the ROI may decrease.
Overall, calculating the ROI of a storage provider business is complex and requires a thorough understanding of the costs and income streams involved. The storage provider Forecast Calculator can assist in determining the ROI by accounting for various factors such as hardware costs, token price, and expected growth of the network.
Calculating the ROI of your storage provider business is important. Check out the Storage Provider Forecast Calculator for more details.
For more information and context see the following video:
ROI depends on more variables than simply cost versus income. In summary, the factors that influence your ROI are:
Verified Deals:
How much of your total sealed capacity will be done with Verified Deals (Filecoin Plus)? Those deals give a far higher return because of the 10x multiplier that is added to your storage power and block rewards.
Committed Capacity:
How much of your total sealed capacity will be just committed capacity (CC) sectors (sometimes also called pledged capacity)? These deals give a lower return compared to verified deals but are an easy way to get started in the network. Relying solely on this to generate income is challenging though, especially when the price of FIL is low.
Sealing Capacity:
How fast can you seal sectors? Faster sealing means you can start earning block rewards earlier and add more data faster. The downside is that it requires a lot of hardware.
Deal Duration:
How long do you plan to run your storage provider? Are you taking short-term deals only, or are you in it for the long run? Taking long-term deals comes with an associated risk: if you can’t keep your storage provider online for the duration of the deals, you will get penalized. Short-term deals that require extension have the downside of higher operational costs to extend (which requires the data to be re-sealed).
FIL Collateral pledged:
A substantial amount of FIL is needed to start accepting deals in the Filecoin network. Verified deals require more pledged collateral than CC-deals. Although the collateral is not lost if you run your storage provider business well, it does mean an upfront investment (or lending).
Hardware Investment:
Sealing, storing, and proving the data does require a significant hardware investment as a storage provider. Although relying on services like sealing-as-a-service can lower these requirements for you, it is still an investment in high-end hardware. Take the time to understand your requirements and your future plans so that you can invest in hardware that will support your business.
Operational Costs:
Last but not least, there’s the ongoing monthly cost of operating the storage provider business. Both the costs for technical operations and those for business operations need to be taken into consideration.
Understanding the components of Lotus is necessary in understanding subsequent sections on sealing, and what it means to build well-balanced storage provider architecture.
The diagram below shows the major components of Lotus:
The following components are the most important to understand:
Lotus daemon
Lotus miner
Lotus worker
Boost
Click here for a compatibility matrix of the different components and the required Golang version.
The daemon is a key Lotus component that does the following:
Syncs the chain
Holds the wallets of the storage provider
The machine running the Lotus daemon must be connected to the public internet for the storage provider to function. See the Lotus documentation for more in-depth information on connectivity requirements.
Syncing the chain is a key role of the daemon. It communicates with the other nodes on the network by sending messages, which are, in turn, collected into blocks. These blocks are then collected into tipsets. Your Lotus daemon receives the messages on-chain, enabling you to maintain consensus about the state of the Filecoin network with all the other participants.
Due to the growth in the size of the chain since its genesis, it is not advised for storage providers to sync the entire history of the network. Instead, providers should use the available lightweight snapshots to import the most recent messages. One exception in which a provider would need to sync the entire chain would be to run a blockchain explorer.
Synced chain data should be stored on an SSD; however, faster NVMe drives are strongly recommended. A slow chain sync can lead to delays in critical messages being sent on-chain from your Lotus miner, resulting in the faulting of sectors and the slashing of collateral.
Another important consideration is the size of the file system and available free space. Because the Filecoin chain grows as much as 50GB a day, any available space will eventually fill up. It is up to storage providers to manage the size of the chain on disk and prune it as needed. Solutions like SplitStore (enabled by default) and compacting reduce the storage space used by the chain. Compacting involves replacing the built-up chain data with a recent lightweight snapshot.
Another key role of the Lotus daemon is to host the Filecoin wallets that are required to run a storage provider (SP). As an SP, you will need a minimum of 2 wallets: an owner wallet and a worker wallet. A third wallet, called the control wallet, is required to scale your operations in a production environment.
To keep wallets safe, providers should consider physical access, network access, software security, and secure backups. As with any cryptocurrency wallet, access to the private key means access to your funds. Lotus supports Ledger hardware wallets, the use of which is recommended, or remote wallets with lotus-wallet
on a remote machine (see remote lotus wallet for instructions). The worker and control wallets cannot be kept on a hardware device because Lotus requires frequent access to those wallets. For instance, Lotus may require access to a worker or control wallet to send WindowPoSt proofs on-chain.
Control wallets
Control wallets are required to scale your operations in a production environment. In production, only using the general worker wallet increases the risk of message congestion, which can result in delayed message delivery on-chain and potential sector faulting, slashing, or lost block rewards. It is recommended that providers create wallets for each subprocess. There are five different types of control wallets a storage provider can create:
PoSt wallet
PreCommit wallet
Commit wallet
Publish storage deals wallet
Terminate wallet
The lotus-miner
also gets an address to which funds can/should be sent. This address can be used to pay any fees and collateral. Withdrawal from this address is only possible with the owner wallet private key.
The Lotus miner, often referred to using the daemon naming syntax lotus-miner
, is the process that coordinates most of the storage provider activities. It has 3 main responsibilities:
Storing sectors and data
Scheduling jobs
Proving the stored data
Storage Providers on the Filecoin network store sectors. There are two types of sectors that a provider may store:
Sealed sectors: these sectors may or may not actually contain data, but they provide capacity to the network, for which the provider is rewarded.
Unsealed sectors: used when storing data deals, as retrievals happen from unsealed sectors.
Originally, lotus-miner
was the component with storage access. This resulted in lotus-miner
hardware using internal disks, directly attached storage shelves like JBODs, Network-Attached-Storage (NAS), or a storage cluster. However, this design introduced a bottleneck on the Lotus miner.
More recently, Lotus has added a more scalable storage access solution in which workers can also be assigned storage access. This removes the bottleneck from the Lotus miner. Low-latency storage access is critical because of the impact on storage-proving processes.
Keeping a backup of your sealed sectors, the cache directory, and any unsealed sectors is crucial. Additionally, you should keep a backup of the sectorstore.json
file that lives under your storage path. The sectorstore.json
file is required to restore your system in the event of a failure. You can read more about the sectorstore.json
file in the lotus docs.
It is also imperative to have at least a daily backup of your lotus-miner
state. Backups can be made with:
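For example, using the built-in backup command (the destination path is illustrative):

```shell
lotus-miner backup /backups/miner-state-backup
```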
Another key responsibility of the Lotus Miner is the scheduling of jobs for the sealing pipeline and storage proving.
One of the most important roles of lotus-miner
is the Storage proving. Both WindowPoSt and WinningPoSt processes are usually handled by the lotus-miner
process. For scalability and reliability purposes it is now also possible to run these proving processes on dedicated servers (proving workers) instead of using the Lotus miner.
The proving processes require low-latency access to sealed sectors, and the proof computation itself runs on a GPU. The resulting zkProof
will be sent to the chain in a message. Messages must arrive within 30 minutes for WindowPoSt, and 30 seconds for WinningPoSt. It is extremely important that providers properly size and configure the proving workers, whether they are using just the Lotus miner or separate workers. Additionally, dedicated wallets, described in Control wallets, should be set up for these processes.
Always check if there are upcoming proving deadlines before halting any services for maintenance. For detailed instructions, refer to the Lotus maintenance guide.
The Lotus worker is another important component of the Lotus architecture. There can be - and most likely will be - multiple workers in a single storage provider setup. Assigning specific roles to each worker enables higher throughput, sealing rate, and improved redundancy.
As mentioned above, proving tasks can be assigned to dedicated workers, and workers can also get storage access. The remaining worker tasks encompass running a sealing pipeline, which is discussed in the next section.
Boost is the market component for storage providers to interact with clients. Boost is made of several components (such as boostd, boostd-data, yugabytedb, booster-http etc.). It works as a deal-taking engine (from deals made by clients or other tools), and serves data retrievals to clients who request a copy of the data over graphsync, bitswap or http.
Boost has become a critical component in the software stack of a storage provider and it is therefore necessary to read the Boost documentation carefully.
Boost requires YugabyteDB as of version 2.0. Plan your deployment so that you understand the concepts of YugabyteDB well enough to operate it. See the Boost documentation for more details.
The following commands can help storage providers with their setup.
It is imperative to have at least one daily backup of your Lotus miner state. Backups can be made using the following command:
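For example (the destination path is illustrative):

```shell
lotus-miner backup /backups/miner-state-backup
```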
You can use the following command to view wallets and their funds:
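For example:

```shell
lotus wallet list
```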
Run the following command to check the storage configuration for your Lotus miner instance:
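For example:

```shell
lotus-miner storage list
```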
This command returns information on your sealed space and your scratch space, otherwise known as a cache. These spaces are only available if you have properly configured your Lotus miner by following the steps described in the Lotus documentation.
In some cases it might be useful to check if the system has access to the storage paths to a certain sector. To check the storage paths to sector 1 for instance, use:
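For example:

```shell
lotus-miner storage find 1
```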
To view the scheduled sealing jobs, run the following:
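For example:

```shell
lotus-miner sealing jobs
```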
To see the workers on which the miner can schedule jobs, run:
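For example:

```shell
lotus-miner sealing workers
```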
To check if there are upcoming proving deadlines, run the following:
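For example:

```shell
lotus-miner proving deadlines
```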
InterPlanetary Network Indexer (IPNI) enables users to search for content-addressable data available from storage providers. This page discusses the implications of IPNI for storage providers.
A network indexer, also referred to as an indexer node or indexer, is a node that maps content identifiers (CIDs) to records of who has the data and how to retrieve that data. These records are called provider data records. Indexers are built to scale in environments with massive amounts of data, like the Filecoin network, and are also used by the IPFS network to locate data. Because the Filecoin network stores so much data, clients can’t perform efficient retrieval without proper indexing. Indexer nodes work like a specialized key-value store for efficient retrieval of content-addressed data.
There are two groups of users within the network indexer process:
Storage providers advertise their available content by storing data in the indexer. This process is handled by the indexer’s ingest logic.
Retrieval clients query the indexer to determine which storage providers have the content and what protocol to use, such as Graphsync, Bitswap, etc. This process is handled by the indexer’s find logic.
This diagram summarizes the different actors in the indexer ecosystem and how they interact with each other. In this context, these actors are not the same as smart-contract actors.
Storage providers publish data to indexers so that clients can find that data using the CID or multihash of the content. When a client queries the indexer using a CID or multihash, the indexer then responds to the client with the provider data record, which tells the client where and how the content can be retrieved.
As a storage provider, you will need to run an indexer in your setup so that your clients know where and how to retrieve data. For more information on how to create an index provider, see the IPNI documentation.
This page describes how sealing-as-a-service works, and the benefits to storage providers.
In a traditional setup, a storage provider needs high-end hardware to build out a sealing pipeline. Storage providers with hardware cost or availability constraints can use Sealing-as-a-Service providers, where another provider performs sector sealing on the storage provider’s behalf. In this model, the following occurs:
The storage provider provides the data to the sealer
The sealer seals the data into sectors.
The sealer returns the sealed sectors in exchange for a service cost.
Sealing-as-a-service provides multiple benefits for storage providers:
Available storage can be filled faster, thereby maximizing block rewards, without investing in a complex, expensive sealing pipeline.
Bigger deals can be onboarded, as Sealing-as-a-Service essentially offers a burst capability in your sealing capacity. Thus, storage providers can take on larger deals without worrying about sealing time and not meeting client expectations.
Storage capacity on the Filecoin network can be expanded without investing in a larger sealing pipeline.
Other setups are also possible, in which the sealing partner seals committed capacity (CC) sectors for you, which you then upgrade into data sectors via Snap Deals.
See the following video from Aligned about their offering of Sealing-as-a-Service:
The process of sealing sectors is called the sealing pipeline. It is important for storage providers to understand the steps of the process.
Each step in the sealing process has different performance considerations, and fine-tuning is required to align the different steps optimally. For example, storage providers that don’t understand the expected throughput of each step may end up overloading the sealing pipeline by trying to seal too many sectors at once, or by taking on a dataset that is too large for the available infrastructure. This can lead to a slower sealing rate, which is discussed in greater detail in Sealing Rate.
The sealing pipeline can be broken into the following steps:
The sealing pipeline begins with AddPiece (AP), where the pipeline takes a Piece and prepares it into the sealing scratch space for the PreCommit 1 task (PC1) to take over. In Filecoin, a Piece is data in CAR-file format produced by an IPLD DAG with a corresponding PayloadCID
and PieceCID
. The maximum Piece size is equal to the sector size, which is either 32 GiB or 64 GiB. If the content is larger than the sector size, it must be split into more than one PieceCID
during data preparation.
The AddPiece process only uses a few CPU cores; it doesn’t require a GPU. It does, however, write a lot of data to the sealing volume. Therefore, it is recommended to limit the concurrent AP processes to 1 or 2 via the environment variable AP_32G_MAX_CONCURRENT=1.
It is typically co-located on a server with other worker processes from the sealing pipeline. As PC1 is the next process in the sealing pipeline, running AddPiece on the same server as the PC1 process is a logical architecture configuration.
Consider limiting the AP process to a few cores by using the taskset
command, where <xx-xx>
is the range on which cores the process needs to run on:
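A sketch, assuming the worker is dedicated to AddPiece; the core range 0-7 and scratch path are illustrative, substitute your own <xx-xx> range:

```shell
# Pin the AddPiece worker to cores 0-7 (replace with your <xx-xx> range)
TMPDIR=/srv/ap-tmp taskset -c 0-7 lotus-worker run
```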
PreCommit 1 (PC1) is the most CPU intensive process of the entire sealing pipeline. PC1 is the step in which a sector, regardless of whether it contains data or not, is cryptographically secured. The worker process loads cryptographic parameters from a cache location, which should be stored on enterprise NVMe for latency reduction. These parameters are then used to run Proof-of-Replication (PoRep) SDR encoding against the sector that was put into the sealing scratch space. This task is single-threaded and very CPU intensive, so it requires a CPU with SHA256 extensions. Typical CPUs that meet this requirement include the AMD Epyc Milan/Rome or an Intel Xeon Ice Lake with 32 cores or more.
Using the scratch space, the PC1 task will create 11 layers of the sector. Storage providers must host scratch space for this on enterprise NVMe. This means that:
Every sector consumes scratch space equal to 1+11 times its size on the scratch volume.
For a 32 GiB sector, PC1 requires 384 GiB on the scratch volume
For a 64 GiB sector, PC1 requires 768 GiB.
In order to seal at a decent rate and make use of all the sealing capacity in a PC1 server, you will want to maximize the number of concurrent PC1 jobs on the system. Set the PC1_32G_MAX_CONCURRENT=
environment variable for the PC1 worker. You can learn more about this in the chapter on Sealing Rate. Sealing several sectors multiplies the requirements on CPU cores, RAM, and scratch space by the number of sectors being sealed in parallel.
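As a back-of-the-envelope sizing sketch (the concurrency value of 6 is an example, not a recommendation), the scratch space scales as follows:

```shell
# Scratch space for concurrent 32 GiB PC1 jobs:
# each sector needs (1 unsealed copy + 11 layers) x 32 GiB = 384 GiB.
SECTOR_GIB=32
LAYERS=11
CONCURRENT=6          # example: PC1_32G_MAX_CONCURRENT=6
PER_SECTOR=$(( (1 + LAYERS) * SECTOR_GIB ))
TOTAL=$(( PER_SECTOR * CONCURRENT ))
echo "${PER_SECTOR} GiB per sector, ${TOTAL} GiB scratch in total"
```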
The process of sealing a single 32 GiB sector takes roughly 3 hours but that time depends largely on your hardware and what other jobs are running on that hardware.
When PC1 has completed on a given sector, the entire scratch space for that sector is moved over to the PreCommit 2 (PC2) task. This task is typically executed on a different server than the PC1 server because it behaves differently. In short, PC2 validates PC1 using the Poseidon hashing algorithm over the Merkle Tree DAG that was created in PC1. As mentioned in the previous section, the entire scratch space is either 384 GiB or 768 GiB, depending on the sector size.
Where PC1 is CPU-intensive, PC2 is executed on GPU. This task is also notably shorter in duration than PC1, typically 10 to 20 minutes on a capable GPU. This requires a GPU with at least 10 GiB of memory and 3500+ CUDA cores or shading units, in the case of Nvidia. Storage providers can use slower GPUs, but this may create a bottleneck in the sealing pipeline.
For best performance, compile Lotus with CUDA support instead of OpenCL. For further information, see the Lotus CUDA Setup.
In the case of a Snap Deal, an existing committed capacity sector is filled with data. When this happens, the entire PC1 task does not run again; instead, the snapping process uses the replica-update
and prove-replica-update
to add the data to the sector. This can run on the PC2 worker or on a separate worker depending on your sealing pipeline capacity.
When PC2 has completed for a sector, a precommit message is posted on-chain. If batching is configured, Lotus will batch these messages to avoid sending messages to the chain for every single sector. In addition, there is a configurable timeout interval, after which the message will be sent on-chain. This timeout is set to 24 hours by default. These configuration parameters are found in the .lotusminer/config.toml
file.
If you want to force the pre-commit message on-chain for testing purposes, run:
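For example:

```shell
lotus-miner sectors batching precommit --publish-now
```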
The sealed sector and its 11 layers are kept on the scratch volume until Commit 2 (C2) is complete.
WaitSeed is not an actual task that is executed, but it is a step in the pipeline in which the blockchain forces the pipeline to wait for 150 epochs as a built-in security mechanism. With Filecoin’s 30 second epochs, this means 75 minutes must elapse between PC2 and the next task, Commit 1 (C1).
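The 75-minute figure follows directly from the epoch time:

```shell
# 150 epochs x 30 seconds per epoch, expressed in minutes
echo $(( 150 * 30 / 60 ))   # prints 75
```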
The Commit 1 (C1) phase is an intermediate phase that performs the preparation necessary to generate a proof. It is CPU-bound and typically completes in seconds. It is recommended that storage providers run this process on the server where C2 is running.
The last and final step in the sealing pipeline is Commit 2 (C2). This step involves the creation of zk-SNARK proof. Like PC2, this task is GPU-bound and is, therefore, best co-located with the PC2 task.
Finally, the proof is committed on-chain in a message. As with the pre-commit messages, the commit messages are batched and held for 24 hours by default before committing on-chain to avoid sending messages for each and every sector. You can again avoid batching by running:
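For example:

```shell
lotus-miner sectors batching commit --publish-now
```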
Finally, the sealed sector is stored in the miner’s long-term storage space, along with unsealed sectors, which are required for retrievals if configured to do so.
This page covers RAID configurations, performance implications and availability, I/O behavior for sealed and unsealed sectors, and read/write performance considerations.
Storage systems use RAID for protection against data corruption and data loss. Since cost is an important aspect for storage providers, and you are mostly dealing with cold storage, you will opt for SATA disks in RAID configurations that favor capacity (and read performance). This leads to RAID5, RAID6, RAIDZ, and RAIDZ2. Double-parity configurations like RAID6 and RAIDZ2 are preferred.
The width of a volume is defined by how many spindles (SATA disks) there are in a RAID group. Typical configurations range between 10+2 and 13+2 disks in a group (in a VDEV in the case of ZFS).
Although RAIDZ2 provides high fault tolerance, configuring wide VDEVs also has an impact on performance and availability. ZFS performs an automatic healing task called scrubbing which performs a checksum validation over the data and recovers from data corruption. This task is I/O intensive and might get in the way of other tasks that should get priority, like storage proving of sealed sectors.
Another implication of large RAID sets that gets aggravated with very large capacity per disk is the time it takes to rebuild. Rebuilding is the I/O intensive process that takes place when a disk in a RAID group is replaced (typically after a disk failed). If you choose to configure very wide VDEVs while using very large spindles (20TB+) you might experience very long rebuild times which again get in the way of high priority tasks like storage proving.
The unsealed copies are used for fast retrieval of the data towards the customer. Large datasets in chunks of 32 GiB (or 64 GiB depending on the configured sector size) are read.
In order to avoid different tasks competing for read I/O on disk it is recommended to create separate disk pools with their own VDEVs (when using ZFS) for sealed and unsealed copies.
Write access towards the storage also requires your attention. Depending how your storage array is connected (SAS or Ethernet) you will have different transfer speeds towards the sealed storage path. At a sealing capacity of 6 TiB/day you will effectively be writing 12 TiB/day towards the storage (6 TiB sealed, 6 TiB unsealed copies). Both your storage layout and your network need to be able to handle this.
If this 12 TiB were equally spread across the 24 hrs of a day, this would already require 1.14 Gbps.
12 TiB × 1024 / 24 hr / 3600 sec × 8 = 1.14 Gbps
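The arithmetic can be checked as follows:

```shell
# 12 TiB/day -> sustained Gbps: 12 x 1024 (GiB) x 8 (Gib) / 86400 s
awk 'BEGIN { printf "%.2f Gbps\n", 12 * 1024 * 8 / (24 * 3600) }'
```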
The sealing pipeline produces 32 GiB sectors (64 GiB depending on your configured sector size) which are written to the storage. If you configured batching of the commit messages (to reduce total gas fees) then you will write multiple sectors towards disk at once.
A minimum network bandwidth of 10 Gbps is recommended and write cache at the storage layer will be beneficial too.
Read performance is optimal when choosing RAIDZ2 VDEVs of 10 to 15 disks. RAID sets using parity, like RAIDZ and RAIDZ2, employ all spindles for read operations. This means read throughput is a lot better compared to reading from a single or a few spindles.
There are 2 types of read operations that are important in the context of Filecoin:
random read I/O:
When storage proving happens, a small portion of a sector is read for proving.
sequential read I/O:
When retrievals happen, entire sectors are read from disk and streamed towards the customer via Boost.
This page covers topics related to internet bandwidth requirements, LAN bandwidth considerations, the use of VLANs for network traffic separation, network redundancy measures, and common topologies.
The bandwidth between different components of a network is also important, especially when transferring data between servers. The internal connectivity between servers should be at least 10 Gbps to ensure that planned sealing capacity is not limited by network performance. It is important to ensure that the servers and switches are capable of delivering the required throughput, and that firewalls are not the bottleneck for this throughput.
Virtual Local Area Networks (VLANs) are commonly used to separate network traffic and enhance security. However, if firewall rules are implemented between VLANs, the firewall can become the bottleneck. To prevent this, it is recommended to keep all sealing workers, Lotus miners, and storage systems in the same VLAN. This allows for data access and transfer without involving routing and firewalls, thus improving network performance.
Network redundancy is crucial to prevent downtime and ensure uninterrupted operations. By implementing redundancy, individual networking components can fail without disrupting the entire network. Common industry standards for network redundancy include NIC (network interface card) bonding, LACP (Link Aggregation Control Protocol), or MCLAG (Multi-Chassis Link Aggregation Group).
Depending on the size of the network, different network topologies may be used to optimize performance and scalability. For larger networks, a spine-leaf architecture may be used, while smaller networks may use a simple two-tier architecture.
Spine-leaf architectures provide predictable latency and linear scalability by having multiple L2 leaf switches that connect to the spine switches. On the other hand, smaller networks can be set up with redundant L3 switches or a collapsed spine/leaf design that connect to redundant routers/firewalls.
It is important to determine the appropriate topology based on the specific needs of the organization.
This page covers the importance of security for Filecoin storage providers, including the need to mitigate potential security threats and implement appropriate security controls.
Being a Filecoin storage provider involves more than just storing customer data. You are also responsible for managing Filecoin wallets and running systems that require 24/7 uptime to avoid losing collateral. This means that if your network or systems are compromised due to a security intrusion, you risk experiencing downtime or even losing access to your systems and storage. Therefore, maintaining proper security is of utmost importance.
As a storage provider, you must have the necessary skills and expertise to identify and mitigate potential security threats. This includes understanding common attack vectors such as phishing, malware, and social engineering. On top of that, you must be proficient at implementing appropriate security controls such as firewalls, intrusion detection and prevention systems, and access controls.
Additionally, you must also be able to keep up with the latest security trends and technologies to ensure that your systems remain secure over time. This can involve ongoing training and education, as well as staying informed about new threats and vulnerabilities.
In summary, as a Filecoin storage provider, you have a responsibility to ensure the security of your customer’s data, your own systems, and the Filecoin network as a whole. This requires a thorough understanding of security best practices, ongoing training and education, and a commitment to staying informed about the latest security trends and technologies.
When it comes to network security, it is important to have a solid first line of defense in place. One effective strategy is to implement a redundant firewall setup that can filter incoming traffic as well as traffic between your VLANs.
A next-generation firewall (NGFW) can provide even more robust security by incorporating an intrusion prevention system (IPS) at the network perimeter. This can help to detect and prevent potential threats before they can do any harm.
However, it is important to note that implementing a NGFW with IPS enabled can also have an impact on your internet bandwidth. This is because the IPS will inspect all incoming and outgoing traffic, which can slow down your network performance. As such, it is important to carefully consider your bandwidth requirements and plan accordingly.
A second layer of defense is system security. There are multiple concepts that contribute to good system security:
Host-based firewall (UFW)
Implement a host-based firewall on your systems. On Ubuntu this is UFW (Uncomplicated Firewall), which is iptables-based.
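A minimal UFW policy might look like the following sketch; the subnet and ports are examples only and should match your own management network and libp2p configuration:

```shell
# Default-deny inbound traffic, allow outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH from a management subnet only (example subnet)
sudo ufw allow from 192.168.10.0/24 to any port 22 proto tcp

# Allow the libp2p port you configured for your miner (example port)
sudo ufw allow 24001/tcp

# Enable the firewall and verify the resulting rules
sudo ufw enable
sudo ufw status verbose
```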
SELinux
Linux comes with an additional security framework called SELinux (Security-Enhanced Linux). Most system administrators do not enable it by default because it takes additional consideration and administration. Once activated, though, it offers the highest grade of process and user isolation possible on Linux and contributes greatly to better security.
Not running as root
It is a common mistake to run processes or containers as root. This is a serious security risk because any attacker who compromises a service running as root automatically obtains root privileges on that system.
Lotus software does not require root privileges and therefore should run under a normal account (such as a service account, for instance called lotus) on the system.
Privilege escalation
Since Lotus does not need to run as root, the service account does not require privilege escalation either. This means you should not allow the lotus account to use sudo.
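For example (the account name is illustrative), a dedicated lotus service account without sudo rights can be created like this:

```shell
# Create a login-capable service account named "lotus" (name is an example)
sudo useradd --create-home --shell /bin/bash lotus

# Ensure it is NOT in the sudo group, so no privilege escalation is possible
sudo deluser lotus sudo 2>/dev/null || true

# Verify: the output should not list "sudo"
groups lotus
```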
This page covers the basics of backups and disaster recovery for storage providers. A backup strategy is only as good as the last successful restore.
It is crucial to have a backup of any production system. It is even more crucial to be able to restore from that backup. These concepts are vital to a Filecoin storage provider because not only are you storing customer data for which you have (on-chain) contracts, you have also pledged a large amount of collateral for that data.
If you are unable to restore your Lotus miner and start proving your storage on-chain, you risk losing a lot of money. If you cannot come back online within 6 weeks, you lose all of your collateral, which will most likely lead to bankruptcy.
As such, it matters less what kind of backup you have than whether you can restore from it quickly.
It is a common misconception to assume you are covered against any type of failure by implementing a highly available (HA) setup. HA will protect against unplanned unavailability in many cases, such as a system failure. It will not protect you against data corruption, data loss, ransomware, or a complete disaster at the datacenter level.
Backups and (tested) restores are the basis for a DR (disaster recovery) plan and should be a major point of attention for any Filecoin storage provider, regardless of your size of operation.
When planning for backup and recovery, the terms RPO and RTO are important concepts to know about.
Recovery Time Objective (RTO) is the time taken to recover a certain application or dataset in the event of a failure. Fast recovery means a shorter RTO (typically measured in hours/minutes/seconds). Enterprises plan for very short RTOs when downtime is not acceptable to their business. Application and file system snapshots typically provide the lowest possible RTO.
Recovery Point Objective (RPO) is the last known working backup from which you can recover. A shorter RPO means the time between the last backup and the failure is short. Enterprises plan for very short RPOs for systems and data that changes very often (like databases). Synchronous replication of systems and data typically provides the lowest possible RPO.
Although ‘RPO zero’ and ‘RTO zero’ are the ideal, they are rarely economical in practice. DR planning requires compromises, and as a storage provider you need to weigh cost against RPO.
RTO is typically less concerning for storage providers. The most critical parts to recover are your sealed storage and your wallets. Wallet addresses typically do not change, so the only thing to worry about is your sealed storage. With storage level snapshots (such as ZFS snapshots), you can reduce your RTO to almost zero.
For RPO, although synchronous replication, together with snapshots, can reduce RPO to nearly zero, that is not a cost-efficient solution. Asynchronous replication of sealed storage is the most viable option if you are running at small-to-medium scale. Once you grow beyond 10PB of storage, even replicating the data will become an expensive solution.
Running a storage cluster comes with its own operational challenges, though, which makes it a poor fit for small-to-medium setups.
Both storage providers and data owners (customers) should look at RPO and RTO options. As a customer, you can achieve HA/DR by having multiple copies of your data stored (and proven) across multiple storage providers. In the event of data loss at one provider, other providers will hold a copy of your data from which you can retrieve. As a customer, you choose how much redundancy you need, by doing storage deals with more providers.
RTO for data owners is a matter of how fast the storage provider(s) can provide you the data.
Do your storage providers offer “fast retrieval” of the data through unsealed copies? If not, the unsealing process (typically multiple hours) must be calculated into the RTO.
Do your storage providers pin your data on IPFS, in addition to storing it on Filecoin?
RPO for data owners is less of a concern, especially once the data is sealed. The Filecoin blockchain will enforce availability and durability of the data being stored, once it is sealed. It is therefore important, as a data owner, to know how fast your storage provider can prove the data on-chain.
A first level of protection comes from ZFS (if you are using ZFS as the file system for your storage). Having ZFS snapshots available protects you against data loss caused by human error or tech failure, and potentially even against ransomware. Other file systems typically also have a way to make snapshots, albeit not as efficiently as ZFS.
A second level of defense comes from a dedicated backup system. Not only should you have backup storage (on a different storage array than the original data), you also need a backup server that can at minimum run the Lotus daemon, the Lotus miner, and one WindowPoSt worker (note: this requires a GPU). With that, you can sync the chain, offer retrievals, and prove your storage on-chain from your backup system while you bring your primary back online.
An alternative technique to having a dedicated backup system and copy is to have a storage cluster. This still requires a backup system to run the Lotus daemon, Lotus miner and PoST worker on. Implementing a storage cluster is usually only done for large-scale deployments as it comes with additional operational tasks.
For maximum resilience, you could host your backup system (server + storage) in a different datacenter than your primary system.
One way to prepare for an easy failover of the software components in the event of a failure is to configure floating IP addresses. Instead of pinning lotus daemon and lotus-miner to the host IP address of the server they are running on, you can configure a secondary IP address and pin the daemon to its own IP, and lotus-miner to yet another IP.
This helps to reduce the amount of manual tasks for a failover drastically. If the recovered daemon or miner instance changes IP address it requires quite a lot of reconfiguration in various places.
Having the services on a floating IP allows you to assign this IP to another machine and start the service on it.
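As a sketch (interface names and addresses are examples, not part of any official Lotus setup), a floating IP can be added as a secondary address and later moved to a standby machine:

```shell
# On the active node: add a floating IP for the lotus daemon (example values)
sudo ip addr add 10.0.10.21/24 dev eth0 label eth0:daemon

# During failover: remove it here, then add the same address on the standby
# node and start the daemon there, so clients keep using the same endpoint
sudo ip addr del 10.0.10.21/24 dev eth0
```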
This page covers the importance of understanding the Linux operating system, including installation, configuration, environment variables, performance optimization, and performance analysis.
Becoming a storage provider requires a team with a variety of skills. Of all the technical skills needed to run a storage provider business, storage knowledge is important, but arguably, it is even more important to have deep understanding of the Linux operating system.
Where most enterprise storage systems (NAS, SAN and other types) do not require the administrator to have hands-on Linux experience, Filecoin does require a lot more knowledge about Linux. For starters, this is because Filecoin is not just a storage system. It is a blockchain platform that offers decentralized storage. As a storage provider, you must ensure that your production system is always available, not just providing the storage.
Although Lotus also runs on Mac, production systems generally all run on Linux. More specifically, most storage providers run on Ubuntu. Any Linux distribution should be possible but running Ubuntu makes it easier to find support in the community. Every distribution is a bit different and knowing that all components have been built and tested on Ubuntu, and knowing you have the same OS variables in your environment as someone else, lowers the barrier to starting as a storage provider significantly. Go for Ubuntu Server and choose the latest LTS version.
Install Ubuntu LTS as a headless server. This means there is no desktop environment or GUI installed. It requires you to do everything on the command line. Not having a desktop environment on your server(s) has multiple advantages:
It reduces the attack surface of your systems. Fewer packages installed means fewer patches and updates, but more importantly, fewer potential vulnerabilities.
All installation tasks and operational activities happen from the CLI. When installing and upgrading Lotus, it is recommended to build the binaries from source code. Upgrades to Lotus happen every two months or so. If you are unable to perform a mandatory Lotus upgrade, you may become disconnected from the Filecoin network, which means you could be penalized and lose money, so it’s vital to keep Lotus up-to-date.
Configuration parameters for the Lotus client are stored in 2 places:
in config.toml files in ~/.lotus, ~/.lotusminer and ~/.lotusworker
in environment variables, for instance in ~/.bashrc if you are using Bash as your shell
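For example, the Bash side might look like this. LOTUS_PATH and LOTUS_MINER_PATH are standard Lotus variables; the cache path is purely illustrative:

```shell
# Illustrative ~/.bashrc entries for a storage provider (paths are examples)
export LOTUS_PATH=$HOME/.lotus
export LOTUS_MINER_PATH=$HOME/.lotusminer

# Keep proof parameters on fast storage (example path)
export FIL_PROOFS_PARAMETER_CACHE=/nvme/filecoin-proof-parameters
```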
Lotus needs to open a lot of files simultaneously, and it is necessary to reconfigure the OS to support this.
This is one of the examples where not every Linux distribution behaves the same: on Ubuntu, the limit on open file descriptors is raised through ulimit for the current session and through the system configuration for persistence.
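As an illustration on Ubuntu (the values shown are examples, not official recommendations):

```shell
# Check the current per-process open-file limit
ulimit -n

# Raise it for the current shell session (example value)
ulimit -n 1048576

# Persist a higher system-wide ceiling across reboots (example value)
echo 'fs.file-max = 10000000' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```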
This section covers the technical skills and knowledge required to become a storage provider.
This page covers the storage knowledge required to operate as a Filecoin storage provider.
Storage is a critical component of running a successful storage provider in the Filecoin network. While it may seem obvious that having strong storage skills is important, Filecoin requires a unique end-to-end skill set to run a 24/7 application.
In addition, it is important for storage providers to understand the importance of reliable and efficient storage. Filecoin is designed to incentivize storage providers to keep data safe and secure, and as such, the storage system must be able to maintain high levels of reliability and availability.
Storage providers need to be able to implement and maintain storage infrastructure that meets the needs of clients who require large amounts of storage space. This requires knowledge of various storage technologies, as well as the ability to troubleshoot issues that may arise.
Overall, storage is a critical aspect of the Filecoin network and storage providers must have the necessary skills and knowledge to provide high-quality storage services to clients.
Zettabyte File System (ZFS) is a combined file system and logical volume manager that provides advanced features such as pooled storage, data integrity verification and automatic repair, and data compression. It is a popular choice among storage providers due to its reliability, scalability, and performance.
Configuring ZFS requires knowledge and skills that go beyond the basics of traditional file systems. As a storage provider you need to understand how ZFS manages data, including how it distributes data across disks and how it handles data redundancy and data protection. You must also know how to configure ZFS for optimal performance and how to troubleshoot issues that may arise with ZFS.
In addition to configuring ZFS, storage providers must also be able to manage the disks and other hardware used for storage. This includes selecting and purchasing appropriate hardware, installing and configuring disks and disk controllers, and monitoring disk health and performance.
Having the knowledge and skills to configure ZFS is crucial as a storage provider, as it enables you to provide reliable and high-performance storage services to your clients. Without this expertise, you may struggle to deliver the level of service your clients expect, which could lead to decreased customer satisfaction and loss of business.
ZFS is a combined file system and volume manager, designed to work efficiently on large-scale storage systems. One of the unique features of ZFS is its built-in support for various types of RAID configurations, which makes it an ideal choice for data storage in a Filecoin network.
As a storage provider, it is crucial to have knowledge and skills in configuring ZFS. This includes understanding how to create virtual devices (VDEVs), which are the building blocks of ZFS storage pools. A VDEV can be thought of as a group of physical devices, such as hard disks, solid-state drives, or even virtual disks, that are used to store data.
In addition, storage providers must also understand how wide VDEVs should ideally be, and how to create storage pools with a specific RAID protection level. RAID is a method of protecting data by distributing it across multiple disks in a way that allows for redundancy and fault tolerance. ZFS has its own types of RAID, known as RAID-Z, which come in different levels of protection.
For example, RAIDZ2 is a configuration that provides double parity, meaning that two disks can fail simultaneously without data loss. As a storage provider, it is important to understand how to create storage pools with the appropriate level of RAID protection to ensure data durability.
Finally, creating datasets is another important aspect of ZFS configuration. Datasets are logical partitions within a ZFS storage pool that can have their own settings and attributes, such as compression, encryption, and quota. As a storage provider, it is necessary to understand how to create datasets to effectively manage storage and optimize performance.
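A sketch of creating a pool with one 12-wide RAIDZ2 vdev and a dataset for sealed sectors. The device names, pool name, and dataset properties are examples only:

```shell
# Create a pool with a single 12-disk RAIDZ2 vdev (example device names)
sudo zpool create tank raidz2 \
  /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
  /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl

# Create a dataset with example attributes (sealed sectors do not
# compress, so compression is left off here)
sudo zfs create -o compression=off -o recordsize=1M tank/sealed
sudo zfs list tank/sealed
```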
ZFS provides built-in protection for data in the form of snapshots. Snapshots are read-only copies of a ZFS file system at a particular point in time. By taking regular snapshots, you can protect your data against accidental deletions, file corruption, or other disasters.
To ensure that your data is fully protected, it is important to configure a snapshot rotation schema. This means defining a schedule for taking snapshots and retaining them for a specified period of time. For example, you might take hourly snapshots and retain them for 24 hours, and then take daily snapshots and retain them for a week.
In addition to snapshots, ZFS also allows you to replicate them to another system running ZFS. This can be useful for creating backups or for replicating data to a remote site for disaster recovery purposes. ZFS replication works by sending incremental changes to the destination system, which ensures that only the changes are sent over the network, rather than the entire dataset. This can significantly reduce the amount of data that needs to be transferred and can help minimize network bandwidth usage.
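As an illustration (dataset, snapshot, and host names are examples), a snapshot can be taken and replicated incrementally to a second ZFS system:

```shell
# Take a timestamped snapshot of the dataset (example names)
sudo zfs snapshot tank/sealed@$(date +%Y%m%d-%H%M)

# Initial full replication to a backup host (example host/pool)
sudo zfs send tank/sealed@20240101-0000 | ssh backup zfs receive backup/sealed

# Later: send only the incremental changes between two snapshots
sudo zfs send -i tank/sealed@20240101-0000 tank/sealed@20240102-0000 \
  | ssh backup zfs receive backup/sealed
```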
As a storage provider, it is crucial to be able to troubleshoot and resolve any performance issues that may arise. This requires a deep understanding of the underlying storage system and the ability to use Linux performance analysis tools such as iostat. These tools can help identify potential bottlenecks in the storage system, such as high disk utilization or slow response times.
In addition to troubleshooting, you must also be able to optimize the performance of your storage system. One way to improve performance is by implementing an NVMe write-cache. NVMe is a protocol designed specifically for solid-state drives, which can greatly improve the speed of write operations. By adding an NVMe write-cache to the storage system, you can reduce the latency of write operations and improve overall system performance.
Read-cache on the other hand is typically not useful in the context of Filecoin. This is because sealed sectors are read very randomly, and unsealed sectors will typically not be read twice. Therefore, storing data in a read-cache would be redundant and add unnecessary overhead to the system.
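A typical starting point with iostat (from the sysstat package) is extended per-device statistics at a short interval:

```shell
# Extended device statistics in megabytes, refreshed every 2 seconds.
# Watch %util for saturated disks and await for slow response times.
iostat -xm 2
```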
This page contains some reference architectures that storage providers can use to build out their infrastructure.
The following reference architecture is designed for 1 PiB of raw sectors or raw data to be stored. Let’s discuss the various design choices of this architecture.
32 CPU Cores
512 GB RAM
8x 2 TB SSD storage
2x 10 GbE ethernet NICs
Lotus daemon and Boost run as Virtual Machines in this architecture. The advantages of virtualization are well-known, including easy reconfiguration of parameters (CPU, memory, disk) and portability. The daemon is not a very intensive process by itself, but must be available at all times. We recommend having a second daemon running as another VM or on backup infrastructure to which you can fail over.
Boost is a resource-intensive process, especially when deals are being ingested over the internet. It also feeds the data payload of deals into the Lotus miner.
We recommend 12-16 cores per VM and 128 GiB of memory. Lotus daemon and Boost need to run on fast storage (SSD or faster). The capacity requirements of Boost depend on the size of deals you are accepting as a storage provider. Its capacity must be sufficient to be landing space for deals until the data can be processed by your sealing cluster in the backend.
Both Lotus daemon and Boost require public internet connectivity. In the case of Boost you also need to consider bandwidth. Depending on the deal size you are accepting, you might require 1 Gbps or 10 Gbps internet bandwidth.
16 CPU Cores
256 GB RAM
2x 1TB SSD storage
2x 10 GbE ethernet NICs
Lotus miner becomes a less intensive process with dedicated PoST workers separated from it (as in this design). If you use a dedicated storage server or NAS system as the storage target for your sealed and unsealed sectors, Lotus miner eventually could also become a VM. This requires additional CPU and memory on the hypervisor host.
We opted for a standalone Lotus miner in this design and gave it 256 GiB of memory. This is because we operate ZFS at the storage layer, which requires a lot of memory for caching. Lotus miner has enough with 128 GiB of memory when you opt for a dedicated storage server or NAS system for your storage.
In this architecture we have attached storage shelves to the Lotus miner with 2.4 PiB of usable capacity. This is the capacity after the creation of a RAIDZ2 file system (double parity). We recommend vdevs of 12 disks wide. In RAIDZ2 this results in 10 data disks and 2 parity disks. Storage systems also don’t behave well at 100% used capacity, so we designed for 20% extra capacity.
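As a sanity check on these numbers, the raw capacity behind 2.4 PiB usable can be sketched with integer shell arithmetic (working in hundredths of a PiB; the 12/10 ratio comes from 10 data disks per 12-disk RAIDZ2 vdev):

```shell
# 12-wide RAIDZ2: 10 data disks + 2 parity disks per vdev
usable=240                   # 2.40 PiB usable, in hundredths of a PiB
raw=$((usable * 12 / 10))    # scale up by 12/10 for parity overhead
echo "raw capacity needed: ${raw} hundredths of a PiB (2.88 PiB)"
```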
16 CPU Cores
128 GB RAM
2x 1TB SSD storage
1x GPU 10+ GB memory, 3500+ CUDA cores
2x 10 GbE ethernet NICs
We have split off the WinningPoSt and WindowPoSt tasks from the Lotus miner. Using dedicated systems for these processes increases the likelihood of winning block rewards and reduces the likelihood of missing a proving deadline. For redundancy, you can run a standby WindowPoSt worker on the WinningPoSt server and vice versa.
PoST workers require at least 128 GiB of memory and a capable GPU with 10 GB of memory and 3,500 or more CUDA cores.
The sealing workers require the most attention during the design of a solution. Their performance will define the sealing rate of your setup, and hence, how fast you can onboard client deals.
AP / PC1 worker
32 CPU Cores with SHA-extensions
1 TB RAM
2x 1TB SSD OS storage
15+ TB U.3 NVMe sealing / scratch storage
2x 10 GbE (or faster) ethernet NICs
We put the AddPiece and PreCommit1 tasks together on a first worker. This makes sense because AddPiece prepares the scratch space that is used by the PC1 tasks thereafter. The first critical hardware component for PC1 is the CPU, which must support SHA-256 extensions. Most storage providers opt for AMD EPYC (Rome, Milan, or Genoa) processors, although Ice Lake and newer Intel Xeon processors also support these extensions.
To verify if your CPU has the necessary extensions, run:
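One common check, assuming a Linux system exposing /proc/cpuinfo, is to grep the CPU flags for sha_ni:

```shell
# Look for the SHA-NI instruction-set flag; if nothing is found,
# the CPU lacks the SHA extensions that speed up PC1
grep -o -m1 'sha_ni' /proc/cpuinfo || echo "sha_ni not found"
```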
PC1 is a single-threaded process so we require enough CPU cores to run multiple PC1 tasks in parallel. This reference architecture has 32 cores in a PC1, which would allow for ~30 parallel PC1 processes.
For this, we also need 1TB of memory in the PC1 server.
Every PC1 process requires approximately 450 GiB of sealing scratch space. This scratch space is vital to the performance of the entire sealing setup and requires U.2 or U.3 NVMe media. For 30 parallel PC1 processes we therefore need ~15 TiB of scratch space. RAID protection on this volume is not mandatory; however, losing 30 in-flight sectors and having to start sealing over does have an impact on your sealing rate.
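To illustrate where the ~15 TiB figure comes from, a back-of-the-envelope sketch (using the ~450 GiB per-task figure mentioned above):

```shell
# ~30 parallel PC1 tasks, each using roughly 450 GiB of scratch space
parallel_pc1=30
scratch_per_task_gib=450
total_gib=$((parallel_pc1 * scratch_per_task_gib))   # 13500 GiB
echo "total scratch: ${total_gib} GiB (~$((total_gib / 1024)) TiB)"
```

Rounding up and leaving some headroom brings this to the ~15 TiB of NVMe scratch specified in the hardware list.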
PC2 / C1 / C2 workers
32 CPU Cores
512 GB RAM
2x 1TB SSD
1x GPU 10+ GB memory, 3500+ CUDA cores
2x 10 GbE (or faster)
The next step in the sealing pipeline is PreCommit2 (PC2). You could decide to keep it together with PC1, but given the size of our setup (1 PiB) and the likely requirement to scale beyond that later, we split off PC2 in this architecture.
The scratch space contents from PC1 are copied over to the PC2 worker. This PC2 worker also requires fast NVMe scratch space. Since we plan for 2 PC2 workers against 1 PC1 worker, the scratch capacity per PC2 worker is half the total scratch capacity of the PC1 worker: 8 TiB in our case.
C1 doesn’t require much attention in our architecture. C2, however, again requires a capable GPU.
This page covers the importance of network skills for a storage provider setup, including network architecture, monitoring, security, infrastructure components, and performance optimizations.
Network skills are crucial for building and maintaining a well-functioning storage provider setup. The network architecture plays a vital role in the overall performance of the storage system. Without a proper network architecture, the system can easily become bogged down and suffer from poor performance.
To ensure optimal performance, it is essential to understand where the bottlenecks in the network setup are. This requires a good understanding of network topology, protocols, and hardware. It is also important to be familiar with network monitoring tools that can help identify performance issues and optimize network traffic.
In addition, knowledge of security protocols and best practices is essential for protecting the storage provider setup from unauthorized access, data breaches, and other security threats. Understanding network security principles can help ensure the integrity and confidentiality of data stored on the network.
Overall, network skills are essential for building a high-performing, well-balanced storage provider setup. A solid understanding of network architecture, topology, protocols, and security principles can help optimize performance, prevent bottlenecks, and protect against security threats.
For example, a storage provider setup may have multiple servers that are connected to a network. If the network architecture is not designed properly, data transfer between the servers can become slow and cause delays. This can lead to poor performance and frustrated users. By understanding network architecture and designing the network properly, such bottlenecks can be avoided.
Monitoring the network is also crucial in identifying potential performance issues. Network monitoring tools can provide insights into network traffic patterns, bandwidth usage, and other metrics that can be used to optimize performance. Monitoring the network can help identify bottlenecks and areas where improvements can be made.
Network security is another important consideration for storage provider setups. A network that is not properly secured can be vulnerable to unauthorized access, data breaches, and other security threats. Network security principles such as firewalls, encryption, and access control can be used to protect the storage provider setup from these threats.
In summary, network skills are essential for building and maintaining a high-performing storage provider setup. A solid understanding of network architecture, topology, protocols, and security principles can help optimize performance, prevent bottlenecks, and protect against security threats. Monitoring the network is also crucial in identifying potential issues and ensuring smooth data flow.
Network infrastructure, including switches, routers, and firewalls, plays a crucial role in the performance, reliability, and security of any network. Having the right infrastructure in place is essential to ensuring smooth and seamless network connectivity.
Switches are essential for connecting multiple devices within a network. They direct data traffic between devices on the same network, allowing for efficient communication and data transfer. Switches come in a variety of sizes and configurations, from small desktop switches for home networks to large modular switches for enterprise networks. Choosing the right switch for your network can help ensure optimal performance and reliability.
Routers, on the other hand, are responsible for connecting different networks together. They enable communication between devices on different networks, such as connecting a home network to the internet or connecting multiple offices in a business network. Routers also provide advanced features such as firewall protection and traffic management to help ensure network security and optimize network performance.
Firewalls act as a first line of defense against external threats. They filter traffic coming into and out of a network, blocking malicious traffic and allowing legitimate traffic to pass through. Firewalls come in various forms, from hardware firewalls to software firewalls, and can be configured to block specific types of traffic or restrict access to certain parts of the network.
When it comes to network infrastructure, it’s important to choose switches, routers, and firewalls that are reliable, efficient, and secure. This means taking into account factors such as network size, bandwidth requirements, and security needs when selecting infrastructure components.
In addition to choosing the right components, it’s also important to properly configure and maintain them. This includes tasks such as setting up VLANs, implementing security features such as access control lists (ACLs), and regularly updating firmware and software to ensure optimal performance and security.
In summary, network infrastructure, including switches, routers, and firewalls, is essential for building a reliable and secure network. Whether you are building a small home network or a large-scale enterprise network, investing in the right infrastructure components and properly configuring and maintaining them can help ensure optimal network performance, reliability, and security.
Performance is a critical aspect of a storage provider setup, particularly when dealing with high network throughput requirements between multiple systems. To ensure optimal performance, it is important to use network benchmarking tools such as iperf and iperf3. These tools make it easy to test network throughput and identify bottlenecks in the network setup.
By using iperf or iperf3, you can determine the maximum network throughput between two systems. This can help you identify potential performance issues, such as network congestion or insufficient bandwidth. By running network benchmarks, you can also determine the impact of changes to the network setup, such as adding or removing hardware components.
As a storage provider, you also need to make trade-offs between performance and cost. Higher bandwidth networks typically offer better performance but come with a higher cost. Therefore, you need to perform calculations to determine whether investing in a higher bandwidth network is worth the cost.
For example, if your storage provider setup requires high network throughput, but your budget is limited, you may need to prioritize certain network components, such as switches and network cards, over others. By analyzing the performance impact of each component and comparing it to the cost, you can make informed decisions about which components to invest in.
In summary, performance is a critical aspect of a storage provider setup, particularly when dealing with high network throughput requirements. Network benchmarking tools such as iperf and iperf3 can help identify potential performance issues and optimize the network setup. To make informed decisions about the network setup, you also need to make trade-offs between performance and cost by analyzing the impact of each component and comparing it to the cost.
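A quick throughput test between two machines might look like this (the hostname is an example):

```shell
# On the receiving machine: start an iperf3 server
iperf3 -s

# On the sending machine: run a 10-second test against it (example host)
iperf3 -c pc2-worker-01 -t 10

# Test the reverse direction without swapping server and client roles
iperf3 -c pc2-worker-01 -t 10 -R
```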
It is possible, though, to configure wider VDEVs (RAID groups) for the unsealed sectors. Physically separating sealed and unsealed copies has other advantages, which are explained elsewhere in this documentation.
Storage providers keep copies of sealed sectors and unsealed sectors (for fast retrieval) on their storage systems. However, the I/O behavior on sealed sectors is very different from the I/O behavior on unsealed sectors. When WindowPoSt runs, only a very small portion of the data is read. A large storage provider will have many sectors in multiple partitions for which WindowPoSt requires fast access to the disks. This is unusual I/O behavior for any storage system.
The amount of internet bandwidth required for a network largely depends on the size of the organization and customer expectations. A bandwidth between 1 Gbps and 10 Gbps is generally sufficient for most organizations, but the specific requirements should be determined based on the expected traffic. A minimum bandwidth of 10 Gbps is recommended for setups that include a Saturn node, as Saturn requires a high-speed connection to handle large amounts of data.
In such cases you might want to look into storage cluster solutions with built-in redundancy. Very large storage providers operate distributed storage systems (Ceph, for example) or other solutions with built-in erasure coding. Although this becomes more like an HA setup than a DR setup, at scale it becomes the only economically viable option.
Do your storage providers offer retrieval through Saturn for ultra-fast retrieval?
As you will be running several tasks on GPU, it’s best to avoid running a desktop environment, which might compete for resources on the GPU.
Exclude the nvidia-drivers and cuda packages from your updates (on Ubuntu, for instance, with apt-mark hold). Once you have a working setup for your specific GPU, you will want to test these packages before you risk breaking the setup. Many storage providers need to install the NVIDIA drivers manually, since some operating systems do not include this package by default.
Configuration parameters, and most environment variables, are covered in the Lotus documentation. More specific environment variables around performance tuning can be found in the rust-fil-proofs repository on GitHub.
Some storage providers fine-tune their setups by enabling CPU-core-pinning of certain tasks (especially PC1); as a starting storage provider, it is not necessary to do that level of tuning. It is essential, however, to have some understanding of Linux process scheduling and priorities, to know how to prioritize and deprioritize tasks in the OS. In the case of Lotus workers, you certainly want to prioritize the lotus-worker process(es).
Diagnosing performance bottlenecks on a system is vital to keeping a well-balanced setup.
There are many good resources on Linux performance troubleshooting; Brendan Gregg’s material is an excellent introduction. Each of these commands deserves a chapter of its own, but they can be further researched in their man pages.
Sealing requires atypical read behavior from a storage system. This means that the storage administrator must be able to design for this behavior and analyze the storage system accordingly.
Keep in mind that outsourcing sealing reduces the requirement for a fast-performing sealing setup. In this design, however, we plan for an on-premise sealing setup of at most 7 TiB/day. This theoretical sealing capacity assumes the entire sealing setup running at full speed for 24 hours a day.
We plan for twice the number of PC2 workers compared to PC1, as explained earlier. Apart from the memory requirements, this process specifically requires a capable GPU, preferably with 24 GB of memory and 6,000 or more CUDA cores.
Please take a look at the presentation Benjamin Hoejsbo gave, in which solo storage provider setups are examined. The presentation is from 2022, but the content is still relevant as of March 2023.
We are working to improve this section. If you would like to share your mining setup, please create an issue!
As we are dealing with high network throughput requirements between multiple systems (to and from Boost, between the PC1 and PC2 workers, and from PC2 to lotus-miner), it is worth learning to work with network benchmarking tools.
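One widely used tool is iperf3. A minimal sketch of measuring throughput between two hosts (the IP address and stream count are illustrative):

```shell
# On the receiving host (e.g., the lotus-miner machine):
iperf3 -s

# On the sending host (e.g., a PC2 worker), with 4 parallel streams:
iperf3 -c 192.168.1.10 -P 4
```

Running with multiple parallel streams (-P) gives a more realistic picture of what sealing traffic will achieve than a single TCP stream.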
Venus is an open-source implementation of the Filecoin network, developed by the blockchain company IPFSForce. Venus is built in Go and is designed to be fast, efficient, and scalable.
Venus is a full-featured implementation of the Filecoin protocol, providing storage, retrieval, and mining functionalities. It is compatible with the Lotus implementation and can interoperate with other Filecoin nodes on the network.
One of the key features of Venus is its support for the Chinese language and market. Venus provides a Chinese language user interface and documentation, making it easier for Chinese users to participate in the Filecoin network.
Venus also includes several advanced features, such as automatic fault tolerance, intelligent storage allocation, and decentralized data distribution. These features are designed to improve the reliability and efficiency of the storage and retrieval processes on the Filecoin network.
Here are some of the most common ways to interact with Venus:
Venus provides a comprehensive API that allows developers to interact with the Filecoin network programmatically. The API includes methods for performing various operations such as storing and retrieving data, mining blocks, and transferring FIL tokens. You can use the API to build custom applications or integrate Filecoin functionality into your existing applications.
Venus includes a powerful command-line interface that allows developers to interact with the Filecoin network from the terminal. You can use the CLI to perform various operations such as creating wallets, sending FIL transactions, and querying the network. The CLI is a quick and easy way to interact with the network and is particularly useful for testing and development purposes.
If you are interested in contributing to the development of Venus itself, you can do so by contributing to the open-source codebase on GitHub. You can submit bug reports, suggest new features, or submit code changes to improve the functionality, security, or performance of the network.
For more information about Venus, including advanced configuration, see the Venus documentation site.
This content covers the importance of understanding and meeting specific requirements, certifications, and compliance standards when working with customers in certain industries.
When working with customers from certain industries, it is important to understand that specific requirements may apply. This can include certifications and compliance standards that are necessary to meet regulatory and legal obligations. Some examples of such standards include:
HIPAA: This standard applies to the handling of medical data and is essential for healthcare providers and organizations.
SOC2: This standard applies to service providers and is used to ensure that they have adequate controls in place to protect sensitive data.
PCI-DSS: This standard applies to businesses that handle payments and ensures that they have adequate security measures in place to protect payment card data.
SOX: This standard applies to businesses operating in the financial sector and is used to ensure that they have adequate controls in place to protect against fraud and financial misconduct.
GDPR: This standard applies to businesses that store personally identifiable information (PII) for European customers and is used to ensure that customer data is protected in accordance with European data privacy regulations.
Local regulations: These regulations can vary per country and are especially important to consider when doing business with government agencies.
ISO 27001: This is a security standard that provides a framework for establishing, implementing, maintaining, and continually improving an information security management system.
Having one or more of these certifications can demonstrate to customers that you have the necessary skills and expertise to handle their data and meet their regulatory requirements. This can be a valuable asset for businesses looking to work with customers in specific industries, as it can provide a competitive edge and help attract new customers. Therefore, it is important for storage providers to stay informed about industry-specific requirements and obtain relevant certifications as necessary.
This section contains information on how to spin up a full Filecoin node using Lotus, as well as options for using remote nodes.
This content covers the business and commercial aspects of running a storage provider business.
Running a storage provider business is not just about having technical expertise and providing storage services. It is also about building and maintaining relationships with clients, negotiating contracts, and managing finances effectively. A storage provider must be able to communicate the value of their services to potential clients, as well as ensure that current clients are satisfied and receive the support they need.
Sales skills are important for storage providers to differentiate themselves from the competition, market their services effectively, and attract new customers. This requires an understanding of the market, the needs of potential clients, and how to tailor their services to meet those needs. Storage providers should also be able to identify opportunities for growth and expansion, and have a strategy in place for pursuing those opportunities.
In addition to sales skills, financial management skills are also crucial for running a successful storage provider business. This includes budgeting, forecasting, and managing cash flow effectively. It is important for storage providers to understand the costs associated with providing their services, and to price their services appropriately in order to generate revenue and cover their expenses.
Overall, sales skills are essential for storage providers to succeed in a competitive market. By combining technical expertise with strong business and commercial skills, storage providers can build a successful and sustainable business.
Running a storage provider business involves several business aspects that require careful attention to ensure long-term success. The first and most obvious aspect is investment in hardware and FIL as collateral. Hardware is the backbone of any storage provider’s business, and ensuring that you have the right equipment to provide reliable and high-performance storage is critical. Additionally, FIL is the primary currency used within the Filecoin network, and as a storage provider, you need to ensure that you have a sufficient amount of FIL as collateral to cover your storage deals.
As your business grows, the amount of hardware and FIL needed will increase, and it is important to have a clear plan for scaling your business. This involves not only investing in additional hardware and FIL but also managing operational costs such as electricity, cooling, and maintenance. Having a skilled business team that can manage and plan for these costs is essential.
Another important aspect of running a storage provider business is managing your relationships with investors, venture capitalists, and banks. These organizations can provide much-needed funding to help grow your business, but they will only invest if they are confident in your ability to manage your business effectively. This means having a strong business plan, a skilled team, and a clear strategy for growth.
In summary, the business aspects of running a storage provider business are critical to its success. This involves managing investments in hardware and FIL, planning for scalability and managing operational costs, and building strong relationships with investors, venture capitalists, and banks.
A storage provider needs storage deals to grow its network power and earn money. There are at least two ways to get storage deals, each requiring specific sales skills.
Obtaining data replicas from other storage providers and programs:
Certain Filecoin data programs specify the minimum number of replicas needed for a deal. This means deals need to be stored across multiple storage providers in the ecosystem, so you can work with peers in the network to share clients’ data replicas.
Working in the ecosystem and building connections with other storage providers takes time and effort, and is essentially a sales activity.
Onboarding your own customers:
Acquiring your own customers, and bringing their data onto the Filecoin network, requires business development skills and people on your team who actively work with data owners (customers) to educate them about the advantages of decentralized storage.
It takes additional effort to work with customers and their data, but it has the additional advantage of being able to charge your customer for the data being stored. This means an additional revenue stream compared to only storing copies of deals, and earning block rewards.
Lotus is a full-featured implementation of the Filecoin network, including the storage, retrieval, and mining functionalities. It is the reference implementation of the Filecoin protocol.
There are many ways to interact with a Lotus node, depending on your specific needs and interests. By leveraging the powerful tools and APIs provided by Lotus, you can build custom applications, extend the functionality of the network, and contribute to the ongoing development of the Filecoin ecosystem.
Lotus provides a comprehensive API that allows developers to interact with the Filecoin network programmatically. The API includes methods for performing various operations such as storing and retrieving data, mining blocks, and transferring FIL tokens. You can use the API to build custom applications or integrate Filecoin functionality into your existing applications.
Lotus includes a powerful command-line interface that allows developers to interact with the Filecoin network from the terminal. You can use the CLI to perform various operations such as creating wallets, sending FIL transactions, and querying the network. The CLI is a quick and easy way to interact with the network and is particularly useful for testing and development purposes.
Lotus is designed to be modular and extensible, allowing developers to create custom plugins that add new functionality to the network. You can develop plugins that provide custom storage or retrieval mechanisms, implement new consensus algorithms, or add support for new network protocols.
If you are interested in contributing to the development of Lotus itself, you can do so by contributing to the open-source codebase on GitHub. You can submit bug reports, suggest new features, or submit code changes to improve the functionality, security, or performance of the network.
Many hosting services provide access to Lotus nodes on the Filecoin network. Check out the RPC section for more information.
For more information about Lotus, including advanced configuration, check out the Lotus documentation site lotus.filecoin.io.
This page gives a very basic overview of how to install Lotus on your computer.
To install Lotus on your computer, follow these steps:
First, you need to download the appropriate binary file for your operating system. Go to the official Lotus GitHub repository and select the latest release that is compatible with your system. You can choose from Windows, macOS, and Linux distributions.
Once you have downloaded the binary file, extract the contents to a directory of your choice. For example, if you are using Linux, you can extract the contents to the /usr/local/bin directory by running the command:
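A minimal sketch, assuming the downloaded archive is a gzipped tarball (the archive name below is illustrative; use the file you actually downloaded):

```shell
# Archive name is illustrative; substitute the release file you downloaded.
sudo tar -xzf lotus_v1.25.2_linux-amd64.tar.gz -C /usr/local/bin
```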
After extracting the contents, navigate to the lotus directory in your terminal. For example, if you extracted the contents to /usr/local/bin, you can navigate to the lotus directory by running the command:
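Assuming the archive created a lotus subdirectory under the extraction path:

```shell
cd /usr/local/bin/lotus
```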
Run the lotus binary file to start the Lotus daemon. You can do this by running the command:
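For example:

```shell
lotus daemon
```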
This will start the Lotus daemon, which will connect to the Filecoin network and start synchronizing with other nodes on the network.
Optionally, you can also run the lotus-miner binary file if you want to participate in the Filecoin mining process. You can do this by running the command:
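A sketch of the command (note that a real miner must first be initialized and configured; this alone will not start mining on a fresh machine):

```shell
lotus-miner run
```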
This will start the Lotus miner, which will use your computer’s computing power to mine new blocks on the Filecoin network.
Lite-nodes are a simplified node option that allows developers to perform lightweight tasks on a local node. This page covers how to spin up a lite node on your local machine.
In this guide, we will use the Lotus Filecoin implementation to install a lite-node on macOS and Ubuntu. For other Linux distributions, check out the Lotus documentation. To run a lite-node on Windows, install WSL with Ubuntu on your system and follow the Ubuntu instructions below.
Lite-nodes have relatively lightweight hardware requirements. Your machine should meet the following hardware requirements:
At least 2 GiB of RAM
A dual-core CPU.
At least 4 GiB of storage space.
To build the lite-node, you’ll need some specific software. Run the following command to install the software prerequisites:
Install the following dependencies:
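On Ubuntu, the build dependencies can be installed with apt. This package list follows the Lotus documentation but may change between releases; verify against the current docs:

```shell
sudo apt update
sudo apt install -y mesa-opencl-icd ocl-icd-opencl-dev gcc git jq pkg-config \
    curl clang build-essential hwloc libhwloc-dev wget
```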
Install Go and add /usr/local/go/bin to your $PATH variable:
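A minimal sketch; the Go version below is illustrative, so check go.dev/dl for the latest release:

```shell
# Version is illustrative; check https://go.dev/dl for the latest release.
wget https://go.dev/dl/go1.21.7.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.21.7.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
source ~/.bashrc
```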
Install Rust, choose the standard installation option, and source the ~/.cargo/env config file:
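For example, using the standard rustup installer:

```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
```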
Before we can build the Lotus binaries, we need to follow a few pre-build steps. macOS users should select their CPU architecture from the tabs:
Clone the repository and move into the lotus directory:
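For example:

```shell
git clone https://github.com/filecoin-project/lotus.git
cd lotus
```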
Retrieve the latest Lotus release version:
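One way to list the newest stable tag (the exact filtering of release-candidate tags is illustrative):

```shell
# List tags, drop pre-release tags, and keep the highest version.
git tag | grep -v '\-rc' | sort -V | tail -n 1
```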
This should output something like:
Using the value returned from the previous command, check out the latest release branch:
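For example (the tag below is illustrative; use the value returned by the previous command):

```shell
# Replace v1.25.2 with the latest release tag.
git checkout v1.25.2
```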
Done! You can move on to the Build section.
Clone the repository and move into the lotus directory:
Retrieve the latest Lotus release version:
This should output something like:
Using the value returned from the previous command, check out the latest release branch:
Create the necessary environment variables to allow Lotus to run on M1 architecture:
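These are the environment variables commonly used for building Lotus on Apple silicon; verify them against the current Lotus documentation for your release:

```shell
export RUSTFLAGS="-C target-cpu=apple-m1"
export CGO_CFLAGS="-D__BLST_PORTABLE__"
```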
Done! You can move on to the Build section.
Clone the repository and move into the lotus directory:
Retrieve the latest Lotus release version:
This should output something like:
Using the value returned from the previous command, check out the latest release branch:
If your processor is newer than an AMD Zen or Intel Ice Lake CPU, enable SHA extensions by adding these two environment variables. If in doubt, skip this step and move on to the next section.
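These are the environment variables commonly used to enable native CPU optimizations when building Lotus; verify them against the current Lotus documentation for your release:

```shell
export RUSTFLAGS="-C target-cpu=native -g"
export FFI_BUILD_FROM_SOURCE=1
```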
Done! You can move on to the Build section.
The last thing we need to do to get our node setup is to build the package. The command you need to run depends on which network you want to connect to:
Remove or delete any existing Lotus configuration files on your system:
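For example (be careful: this permanently deletes any existing Lotus configuration and chain data; back up the directory first if unsure):

```shell
# WARNING: deletes any existing Lotus configuration and chain data.
rm -rf ~/.lotus
```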
Make the Lotus binaries and install them:
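For mainnet, the build commands are typically:

```shell
make clean all
sudo make install
```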
Once the installation finishes, query the Lotus version to ensure everything is installed successfully and for the correct network:
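For example:

```shell
lotus --version
```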
This will output something like:
Remove or delete any existing Lotus configuration files on your system:
Make the Lotus binaries and install them:
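For the calibration testnet, the build commands are typically:

```shell
make clean calibnet
sudo make install
```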
Once the installation finishes, query the Lotus version to ensure everything is installed successfully and for the correct network:
This will output something like:
Let's start the lite-node by connecting to a remote full-node. We can use the public full-nodes from glif.io:
Create an environment variable called FULLNODE_API_INFO and set it to the WebSocket address of the node you want to connect to. At the same time, start the Lotus daemon with the --lite flag:
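For example, using Glif’s public mainnet endpoint (verify the current address on glif.io before relying on it):

```shell
FULLNODE_API_INFO=wss://wss.node.glif.io/apigw/lotus lotus daemon --lite
```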
This will output something like:
The Lotus daemon will continue to run in this terminal window. All subsequent commands we use should be done in a separate terminal window.
Create an environment variable called FULLNODE_API_INFO and set it to the WebSocket address of the node you want to connect to. At the same time, start the Lotus daemon with the --lite flag:
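For example, using Glif’s public calibration testnet endpoint (verify the current address on glif.io before relying on it):

```shell
FULLNODE_API_INFO=wss://wss.calibration.node.glif.io/apigw/lotus lotus daemon --lite
```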
This will output something like:
The Lotus daemon will continue to run in this terminal window. All subsequent commands we use should be done in a separate terminal window.
To send JSON-RPC requests to our lite-node, we need to expose the API.
Open ~/.lotus/config.toml and uncomment ListenAddress on line 6:
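After uncommenting, the [API] section should look similar to this (the address shown is the Lotus default):

```toml
[API]
  # Binding address for the Lotus API
  ListenAddress = "/ip4/127.0.0.1/tcp/1234/http"
```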
Open the terminal window where your lite-node is running and press CTRL + C to close the daemon.
In the same window, restart the lite-node:
This will output something like:
The Lotus daemon will continue to run in this terminal window. All subsequent commands we use should be done in a separate terminal window.
Open ~/.lotus/config.toml and uncomment ListenAddress on line 6:
Open the terminal window where your lite-node is running and press CTRL + C to close the daemon.
In the same window, restart the lite-node:
This will output something like:
The Lotus daemon will continue to run in this terminal window. All subsequent commands we use should be done in a separate terminal window.
The lite-node is now set up to accept local JSON-RPC requests! However, we don't have an authorization key, so we won't have access to privileged JSON-RPC methods.
To access privileged JSON-RPC methods, like creating a new wallet, we need to supply an authentication key with our Curl requests.
Create a new admin token and set the result to a new LOTUS_ADMIN_KEY environment variable:
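For example, using the lotus auth command:

```shell
export LOTUS_ADMIN_KEY=$(lotus auth create-token --perm admin)
echo $LOTUS_ADMIN_KEY
```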
This will output something like:
Keep this key handy. We're going to use it in the next section.
Let's run a couple of commands to see if the JSON-RPC API is set up correctly.
First, let's grab the head of the Filecoin network chain:
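A sketch of the request, assuming the API is exposed on the default address and port:

```shell
curl -X POST 'http://127.0.0.1:1234/rpc/v0' \
  --header 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"Filecoin.ChainHead","params":[],"id":1}'
```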
This will output something like:
Next, let's try to create a new wallet. Since this is a privileged method, we need to supply our auth key eyJhbGc...:
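A sketch of the request; replace the truncated Bearer token with your full admin key:

```shell
curl -X POST 'http://127.0.0.1:1234/rpc/v0' \
  --header 'Authorization: Bearer eyJhbGc...' \
  --header 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"Filecoin.WalletNew","params":["secp256k1"],"id":1}'
```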
This will output something like:
The result field is the public key for our address. The private key is stored within our lite-node.
Set the new address as the default wallet for our lite-node. Remember to replace the Bearer token with our auth key eyJhbGc... and the "params" value with the wallet address, f1vuc4..., returned from the previous command:
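A sketch of the request; substitute your full token and wallet address for the truncated placeholders:

```shell
curl -X POST 'http://127.0.0.1:1234/rpc/v0' \
  --header 'Authorization: Bearer eyJhbGc...' \
  --header 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"Filecoin.WalletSetDefault","params":["f1vuc4..."],"id":1}'
```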
This will output something like:
You should now have a local lite-node connected to a remote full-node with an admin API key! You can use this setup to continue playing around with the JSON-RPC, or start building your applications on Filecoin!
This page provides details on Lotus installation prerequisites and supported platforms.
Before installing Lotus on your computer, you will need to meet the following prerequisites:
Operating system: Lotus is compatible with Windows, macOS, and various Linux distributions. Ensure that your operating system is compatible with the version of Lotus you intend to install.
CPU architecture: Lotus is compatible with 64-bit CPU architectures. Ensure that your computer has a 64-bit CPU.
Memory: Lotus requires at least 8GB of RAM to run efficiently.
Storage: Lotus requires several GB of free disk space for the blockchain data, as well as additional space for the Lotus binaries and other files.
Internet connection: Lotus requires a stable and high-speed internet connection to synchronize with the Filecoin network and communicate with other nodes.
Firewall and port forwarding: Ensure that your firewall settings and port forwarding rules allow incoming and outgoing traffic on the ports used by Lotus.
Command-line interface: Lotus is primarily operated through the command line interface. Ensure that you have a basic understanding of command-line usage and are comfortable working in a terminal environment.
To get more information, check out the official Lotus documentation.
This section covers what lite-nodes are, and how developers can use them to interact with the Filecoin network.
A node provider, sometimes specifically called a remote node provider, is a service that offers access to remote nodes on the Filecoin network.
Nodes are essential components of the Filecoin network. They maintain copies of the blockchain’s entire transaction history and verify the validity of new transactions and blocks. Running a node requires significant computational resources and storage capacity, which can be demanding for individual developers or teams.
Remote node providers address this challenge by hosting and maintaining Filecoin nodes on behalf of their clients. By utilizing a remote node provider, developers can access blockchain data, submit transactions, and query the network without the need to synchronize the entire blockchain or manage the infrastructure themselves. This offers convenience and scalability, particularly for applications or services that require frequent and real-time access to blockchain data.
Remote node providers typically offer APIs or other communication protocols to facilitate seamless integration with their hosted nodes. These APIs allow developers to interact with the Filecoin network, retrieve data, and execute transactions programmatically.
It’s important to note that when using a remote node provider, developers are relying on the provider’s infrastructure and trustworthiness. You should carefully choose a reliable and secure provider to ensure the integrity and privacy of your interactions with the blockchain network.
Node providers often limit the specifications of the nodes that they offer. Some developers may need particularly speedy nodes or nodes that contain the entire history of the blockchain (which can be incredibly expensive to store).
There are multiple node providers for the Filecoin mainnet and each of the testnets. Check out the Networks section for details.
The Filecoin Virtual Machine (FVM) is a runtime environment enabling users to deploy their own smart contracts on the Filecoin blockchain. This page covers the basics of the FVM.
NOTE: As of January 2025, for developer support, please visit the FILB website. For Filecoin product updates, please visit the FILOz website or see the Lotus Github discussion page.
Filecoin’s storage and retrieval capabilities can be thought of as the base layer of the Filecoin blockchain, and FVM can be thought of as a layer on top of Filecoin that unlocks programmability on the network (e.g. programmable storage primitives).
While other blockchains have smart contract capabilities, FVM smart contracts can combine Filecoin storage and retrieval primitives with computational logic. FVM will also enable Layer 2 capabilities, such as compute-over-data and content delivery networks.
Some additional notes about FVM’s technical specifications:
WASM-based: The FVM is a WASM-based polyglot execution environment for IPLD data, meaning that FVM gives developers access to IPFS / IPLD data primitives and can accommodate smart contracts (actors) written in any programming language that compiles to WASM.
FEVM Compatibility: Are you an Ethereum / Solidity developer? You can build the next killer app on FVM and make use of the Filecoin Solidity library. Learn more about how FVM is Ethereum runtime and solidity compatible in the next section.
VM Agnostic: The FVM is built to be VM-agnostic, meaning support for other foreign VMs can be added in the near future. Future versions of FVM can serve as a useful hypervisor enabling cross run-time invocations.
FVM brings user programmability to Filecoin, unleashing the enormous potential of an open data economy through various applications.
FVM Actors enable a huge range of use cases to be built on Filecoin. Here are just a few potential examples:
Data Access Control: FVM Actors can enable a client to grant retrieval permission for certain files to a limited set of third-party Filecoin wallet addresses.
DataDAO: FVM Actors can enable the creation of decentralized autonomous organizations where members govern and manage the storage, accessibility, and monetization of certain data sets and pool returns into a shared treasury.
Perpetual Storage: Because all Filecoin storage deals are time-limited, a client who stores a data set with a storage provider must periodically decide whether to renew the deal for the next period with the same provider or seek out another provider that may be cheaper. FVM enables a client to automatically renew deals, or switch to a cheaper storage provider, when a deal reaches maturity. This automated renewal can persist, even in perpetuity, for as many cycles as an associated endowment of FIL can finance. FVM Actors enable the creation and management of this endowment.
Replication: In addition to allowing a client to store one data set with one storage provider in perpetuity, FVM Actors enable data resiliency by allowing a client to store one data set once manually and then have the Actor replicate that data with multiple other storage providers automatically. Additional conditions that can be set in an automated replication Actor include choices about the geographic region of the storage providers, latency, and deal price limits.
Leasing: FVM Actors enable a FIL token holder to provide collateral to clients looking to do a storage deal, and be repaid the principal and interest over time. FVM Actors can also trace the borrowing and repayment history of a given client, generating a community-developed reputation score.
Additional use cases enabled by FVM include, but are not limited to, tokenized data sets, trustless reputation systems, NFTs, storage bounties and auctions, Layer 2 bridges, futures and derivatives, or conditional leasing.
If you’re ready to start building on the FVM, here are some resources you should explore:
FVM Reference Implementation: The Github repo containing the reference implementation for FVM.
FVM Quickstart Guide: The Quickstart guide will walk you through deploying your first ERC-20 contract on FVM. In addition to being provided this code, we also walk you through the developer environment set-up.
Developing Contracts: If you are ready to build your dApp on FVM, you can skip ahead and review our best practices section for developing contracts. Here, you can find a guide for the Filecoin solidity libraries, details on tools such as Foundry, Remix, and Hardhat, and tutorials for calling built-in actors and building client contracts.
The next page will walk you through the process of deciding whether you need to use FVM’s programmatic storage when building a dApp with storage on Filecoin.
Nodes are participants that contribute to the network’s operation and maintain its integrity. There are two major node implementations running on the Filecoin network today, with more in the works.
Lotus is the reference implementation of the Filecoin protocol, developed by Protocol Labs, the organization behind Filecoin. Lotus is a full-featured implementation of the Filecoin network, including the storage, retrieval, and mining functionalities. It is written in Go and is designed to be modular, extensible, and highly scalable.
Venus is an open-source implementation of the Filecoin network, developed by IPFSForce. The project is built in Go and is designed to be fast, efficient, and scalable.
Venus is a full-featured implementation of the Filecoin protocol, providing storage, retrieval, and mining functionalities. It is compatible with the Lotus implementation and can interoperate with other Filecoin nodes on the network.
One of the key features of Venus is its support for the Chinese language and market. Venus provides a Chinese language user interface and documentation, making it easier for Chinese users to participate in the Filecoin network.
While Lotus and Venus share many similarities, they differ in their development, feature sets, focus, and community support. Depending on your needs and interests, you may prefer one implementation over the other:
Both Lotus and Venus are fully compatible with the Filecoin network and can interoperate with other Filecoin nodes on the network.
While both implementations provide storage, retrieval, and mining functionalities, they differ in their feature sets. Lotus includes features such as a decentralized storage market, a retrieval market, and a built-in consensus mechanism, while Venus includes features such as automatic fault tolerance, intelligent storage allocation, and decentralized data distribution.
Lotus has a more global focus, while Venus has a stronger focus on the Chinese market. Venus provides a Chinese language user interface and documentation, making it easier for Chinese users to participate in the Filecoin network.
Forest is the Rust implementation of the Filecoin protocol with low hardware requirements (16 GiB RAM, 4 cores), developed by ChainSafe. Forest focuses on blockchain analytics and does not support storage, retrieval, or mining.
Forest is currently used for generating up-to-date snapshots and managing archival copies of the Filecoin blockchain. Currently, the Forest team is hosting the entire Filecoin archival data for the community to use. This can be downloaded for free here.
You can learn more about Forest at the codebase on GitHub and documentation site.
Learn about the various tools and options for adding Filecoin storage to software applications, smart contracts, and workflows.
Filecoin combines the benefits of content-addressed data leveraged by IPFS with blockchain-powered storage guarantees. The network offers robust and resilient distributed storage at massively lower cost compared to current centralized alternatives.
Developers choose Filecoin because it:
is the world’s largest distributed storage network, without centralized servers or authority
offers on-chain proofs to verify and authenticate data
is highly compatible with IPFS and content addressing
is the only decentralized storage network with petabyte-scale capacity
stores data at extremely low cost (and keeps it that way for the long term)
How do Filecoin and IPFS work together? They are complementary protocols for storing and sharing data in the distributed web. Both systems are open-source and share many building blocks, including content addressing (CIDs) and network protocols (libp2p).
IPFS does not include built-in mechanisms to incentivize the storage of data for other people. To persist IPFS data, you must either run your own IPFS node or pay a provider.
This is where Filecoin comes in. Filecoin adds an incentive layer to content-addressed data. Storage deals are recorded on-chain, and providers must submit proofs of storage to the network over time. Payments, penalties, and block rewards are all enforced by the decentralized protocol.
Filecoin and IPFS are designed as separate layers to give developers more choice and modularity, but many tools are available for combining their benefits. This diagram illustrates how these tools (often called storage helpers) provide developer-friendly APIs for storing on IPFS, Filecoin, or both.
You can improve speed and reduce gas fees by storing smart contract data on Filecoin. With Filecoin, the data itself is stored off-chain but is used to generate verifiable CIDs and storage proofs that are recorded on the Filecoin chain and can be included in your smart contracts. This design pairs well with multiple smart contract networks, such as Ethereum, Polygon, Avalanche, and Solana. Your smart contract only needs to include the compact content identifiers (CIDs).
Let’s get building. Choose one of the following APIs. These are all storage helpers: tools and services that abstract Filecoin’s robust deal-making process into simple, streamlined API calls.
Chainsafe Storage API - for projects needing S3 compatibility
NFT.storage - for NFT data
Web3.storage - for general application data
Examples:
What is an IPFS Pinning Service? (Pinata explainer)
Developing on Filecoin (video)
Textile tools: video and documentation
This page details what exactly EVM compatibility means for the FVM, and any other information that Ethereum developers may need to build applications on Filecoin.
The Ethereum Virtual Machine is an execution environment initially designed, built for, and run on the Ethereum blockchain. The EVM was revolutionary because, for the first time, any arbitrary code could be deployed to and run on a blockchain. This code inherited all the decentralized properties of the Ethereum blockchain. Before the EVM, a new blockchain had to be created with custom logic and then bootstrapped with validators every time a new type of decentralized application needed to be built.
Code deployed to the EVM is typically written in the high-level language Solidity, although other languages, such as Vyper, exist. The high-level Solidity code is compiled to EVM bytecode, which is what is actually deployed to and run on the EVM. As the first virtual machine to run on top of a blockchain, the EVM has developed one of the strongest developer ecosystems in Web3 to date. Today, many different blockchains run their own instance of the EVM to allow developers to easily port their existing applications into the new blockchain’s ecosystem.
The Filecoin EVM, often just referred to as FEVM, is the Ethereum virtual machine virtualized as a runtime on top of the Filecoin virtual machine. It allows developers to port any existing EVM-based smart contracts straight onto the FVM. The Filecoin EVM runtime is completely compatible with any EVM development tools, such as Hardhat, Brownie, and MetaMask, making deploying and interacting with EVM-based actors easy! This is because Filecoin nodes offer the Ethereum JSON-RPC API.
For a deeper dive into the concepts discussed on this page, see the presentation Ethereum compatibility of FVM:
The FVM project has come a long way in an incredibly short amount of time. This is the roadmap for FVM features for the Filecoin network.
The goal of the FVM project is to add general programmability to the Filecoin blockchain. Doing so will give developers all kinds of creative options, including:
Orchestrating storage.
Creating L2 networks on top of the Filecoin blockchain.
Providing new incentive structures for providers and users.
Frequently verifying that providers are storing data correctly.
Automatically finding which storage providers are storing what data.
Many more data-based applications.
Filecoin deployed programmability post-genesis so that layer 0 of the Filecoin blockchain could first be made stable and fully functional. Due to the large amount of capital already secured within the Filecoin network, the development of the FVM needs to be careful and gradual.
The FVM roadmap is split into three initiatives:
Milestone 1: Initialize the project and allow built-in actors to run on the FVM.
Milestone 2: Enable the deployment of Ethereum virtual machine (EVM) compatible smart contracts onto the FVM. Also, allow developers to create and deploy their own native actors to the FVM.
Milestone 3: Continue to enhance programmability on FVM.
✅ Lotus mainnet canaries with FVM support
Completed in February 2022
The reference FVM implementation has been integrated into a fork of Lotus (the Filecoin reference client). A fleet of canary nodes have been launched on mainnet, running WASM-compiled built-in actors on the FVM. The canaries are monitored for consensus faults and to gather telemetry. This milestone is a testing milestone that’s critical to collect raw execution data to feed into the overhaul of the gas model, in preparation for user-programmability. It implies no network upgrade.
✅ Ability to run FVM node and sync mainnet
Completed in March 2022
Any node operator can sync the Filecoin Mainnet using the FVM and Rust built-in actors, integrated in Lotus, Venus, Forest, and Fuhon implementations. It implies no network upgrade.
✅ Introduction of non-programmable WASM-based FVM
Completed in May 2022
Mainnet will atomically switch from the current legacy virtual machines to the WASM-based reference FVM. A new gas model will be activated that accounts for actual WASM execution costs. Only Rust built-in actors will be supported at this time. This milestone requires a network upgrade.
✅ Network Version 17 (nv17): Initial protocol refactors for programmability
Completed in November 2022
An initial set of protocol refactors targeting built-in actors, including the ability to introduce new storage markets via user-defined smart contracts.
✅ Ability to deploy EVM contracts to mainnet (FEVM)
Completed in March 2023
The Filecoin network will become user-programmable for the first time. Developers will be able to deploy smart contracts written in Solidity or Yul, and compiled to EVM. Smart contracts will be able to access Filecoin functionality by invoking built-in actors. Existing Ethereum tooling will be compatible with Filecoin. This milestone requires a network upgrade.
✅ Hyperspace testnet goes live
Completed on January 16th 2023
A new stable developer testnet called Hyperspace will be launched as the pre-production testnet. The community is invited to participate in heavy functional, technical, and security testing. Incentives and bounties will be available for developers and security researchers.
✅ FEVM goes live on mainnet
Completed on March 14th 2023
🔄 Ability to deploy Wasm actors to mainnet
To complete midway through 2023
Developers will be able to deploy custom smart contracts written in Rust, AssemblyScript, or Go, and compiled to WASM bytecode. SDKs, tutorials, and other developer materials will be generally available. This milestone requires a network upgrade.
🔮 Further incremental protocol refactors to enhance programmability
To complete in 2023
A series of additional incremental protocol upgrades (besides nv17) to move system functionality from privileged space to user space. The result will be a lighter and less opinionated base Filecoin protocol, where storage markets, deal-making, incentives, etc. are extensible, modular, and highly customizable through user-deployed actors. Enhanced programming features such as user-provided cron, asynchronous call patterns, and more will start to be developed at this stage.
This section explains what the Filecoin EVM-runtime (FEVM) is, and how developers can use it to interact with the Filecoin network.
A list of frequently asked questions about the FVM, the FEVM, and how to build on the Filecoin network.
FVM allows us to think about data stored on Filecoin differently. Apps can now build a new layer on the Filecoin network to enable trading, lending, data derivatives, and decentralized organizations built around datasets.
FVM can create incentives to solve problems that Filecoin participants face today around data replication, data aggregation, and liquidity for miners. Beyond these, there is a long tail of data storage and retrieval problems that will also be resolved by user programmability on top of Filecoin.
The FVM operates on blockchain state data — it does not operate on data stored in the Filecoin network. This is because access to that data depends on network requests, an unsealed copy’s availability, and the SPs’ availability to supply that data.
Access and manipulation of data stored in the network will happen via L2 solutions, for example, retrieval networks or compute-over-data networks, e.g., Saturn or CoD.
Unlike other EVM chains, FEVM specifically allows you to write contracts that orchestrate programmable storage. This means contracts that can coordinate storage providers, data health, perpetual storage mechanisms, and more. Other EVM chains do not have direct access to Filecoin blockchain state data.
An actor is code that the Filecoin virtual machine can run. Actors are also referred to as smart contracts.
It makes storage contracts a native primitive open to smart contract developers, and it reduces the cost of writing to storage from an EVM smart contract compared to calling out to a separate storage service.
FEVM allows Solidity developers to easily write/port actors to the FVM using the tools that have already been introduced in the Ethereum ecosystem.
Applications that natively make use of storage contracts: perpetual storage contracts, data DAOs, and more.
Perpetual storage is a unique actor design paradigm only available on the FVM that allows users the ability to renew Filecoin storage deals and to keep them active indefinitely. This could be achieved by using a Decentralized Autonomous Organization (DAO) structure for example.
Data DAOs are a unique design paradigm FVM developers can create, using Filecoin storage to store all of their data instead of a centralized service like AWS.
Yes.
api.hyperspace.node.glif.io/rpc/v1
api.zondax.ch/fil/node/hyperspace/rpc/v1
No, the FEVM is its own instance of the EVM built on top of Filecoin. You will need to redeploy smart contracts that exist on other EVM chains to the FEVM. However, bridges can be built on top of the FEVM to connect it to other blockchains.
When an EVM smart contract is deployed to the FEVM, an actor instance is created in the FEVM that runs the contract’s EVM bytecode on the WASM-compiled EVM interpreter. The user-deployed FEVM actor can then interact with the Filecoin network via built-in actors, such as the market and miner actors.
No, it must be deployed to the FEVM.
React, Ethers.js, and Web3.js work well.
How do I convert a 0x address to the underlying Filecoin f address?
The intent of the FEVM/FVM is to compute over state data (the metadata of your stored data). Storage providers store your data and publish the deal to the Filecoin network. Data retrieval happens via retrieval providers, which accept the client’s request and work with storage providers to unseal the data and deliver it to the client. The FEVM/FVM can build logic around these two processes to automate them, add verification and proofs, time-lock retrievals, and so on.
In the Filecoin network, an address is a unique identifier that refers to an actor in the Filecoin state. All actors in Filecoin have a corresponding address, and the address format varies depending on usage.
The Filecoin EVM runtime introduces three new actor types:
A placeholder is a particular type of pseudo-actor that holds funds until an actual actor is deployed at a specific address. When funds are sent to an address starting with f410f
that doesn’t belong to any existing actor, a placeholder is created to hold the said funds until either an account or smart contract is deployed to that address.
A placeholder can become a real actor in one of two ways:
A message is sent from the account that would exist at that placeholder’s address. If this happens, the placeholder is automatically upgraded into an account.
An EVM smart contract is deployed to the address.
An Ethereum-style account is the Filecoin EVM runtime equivalent of an account with an f1
or f3
address, also known as native accounts. However, there are a few key differences:
These accounts have 0x
-style addresses and an equivalent f
-style address starting with f410f
.
Messages from these accounts can be sent with Ethereum wallets like MetaMask by connecting the wallet to a Filecoin client.
These accounts can be used to transfer funds to native or Ethereum-style accounts.
They can be used to call EVM smart contracts and can be used to deploy EVM smart contracts. However, they cannot be used to call native actors such as multisig or miner actors.
An EVM smart contract actor hosts a single EVM smart contract. Every EVM smart contract will have a 0x
-style address.
An EVM smart contract can be deployed in one of the following ways:
An existing EVM smart contract can use the EVM’s CREATE
/CREATE2
opcode.
A native account can call method 4
on the Ethereum account manager f010
, passing the EVM init code as a CBOR-encoded byte-string (major type 2) in the message parameters.
An EVM smart contract may be called in one of the following ways:
An EVM smart contract can use the EVM’s CALL
opcode.
Finally, a native account can call method 3844450837
(FRC42(InvokeEVM)
):
The input data should either be empty or encoded as a CBOR byte string.
The return data will either be empty or encoded as a CBOR byte string.
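For illustration, here is what the CBOR byte-string framing (major type 2) looks like for short payloads. This is a minimal sketch for payloads under 24 bytes, not an official SDK helper; real parameters should be encoded with a full CBOR library.

```python
def cbor_byte_string(payload: bytes) -> bytes:
    """Encode `payload` as a CBOR byte string (major type 2).

    Minimal sketch: only handles payloads shorter than 24 bytes,
    where the length fits directly in the initial byte (0x40 | len).
    """
    if len(payload) >= 24:
        raise ValueError("longer payloads need an extended length header")
    return bytes([0x40 | len(payload)]) + payload

# An empty byte string encodes as the single byte 0x40.
print(cbor_byte_string(b"").hex())          # 40
print(cbor_byte_string(b"\x01\x02").hex())  # 420102
```

Longer payloads use an extended length header, so a production encoder should rely on an existing CBOR implementation.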
In this quickstart tutorial, we’ll walk through how to deploy your first smart contract to the Filecoin network.
We’re going to install a browser-based wallet called MetaMask, create a new wallet address, supply some test currency to that wallet, and then use a browser-based development environment called Remix to deploy a smart contract to the Filecoin network. In this quickstart, we’ll be creating an ERC-20 token. The ERC-20 standard is widely used to represent tokens across multiple blockchains, most prominently Ethereum.
We’re going to be using MetaMask, a cryptocurrency wallet that lives in your browser and makes it very easy to interact with web3-based sites.
Before we can interact with the Filecoin network, we need funds. But before we can get any funds, we need somewhere to put them!
Install the wallet by clicking the Download button. MetaMask is available for Brave, Chrome, Edge, Firefox, and Opera.
Once you have installed MetaMask, it will open a Get started window.
Click Create a new wallet.
Enter a password to secure your MetaMask wallet. You will need to enter this password every time you use the wallet.
Follow the prompts until you get to the Secret Recovery Phrase window. Read the information about what this recovery phrase is on this page.
Eventually you should get to the Wallet creation success page!
Once you’ve done that, you should have your account set up!
Enable the Testnets toggle and enter Filecoin
into the search bar.
Scroll down to find the Filecoin – Calibration testnet.
In MetaMask click Next.
Click Connect.
Click Approve when prompted to Allow this site to add a network.
Click Switch network when prompted by MetaMask.
Open MetaMask from the browser extensions tab:
You should see the Filecoin Calibration testnet listed at the top.
Nice! Now we’ve got the Filecoin Calibration testnet set up within MetaMask. You’ll notice that our MetaMask window shows 0 TFIL
. Test-filecoin (TFIL
) is FIL
that has no value in the real world, and developers use it for testing. We’ll grab some TFIL
next.
In your browser, open MetaMask and copy your address to your clipboard:
Paste your address into the address field, and click Send Funds.
That’s all there is to it! Getting tFil
is easy!
In Remix, workspaces are where you can create a contract, or group of contracts, for each project. Let’s create a new workspace to create our new ERC-20 token.
Open the dropdown menu and click create a new workspace.
In the Choose a template dropdown, select ERC20.
Under Customize template > Features, check the Mintable box.
Enter a fun name for your token in the Workspace name field. Something like CorgiCoin
works fine.
Click OK to create your new workspace.
The contract template we’re using is pretty simple. We just need to modify a couple of variables.
Click the compiler icon to open the compiler panel. Update the compiler version by selecting 0.8.20
from the compiler dropdown.
Under the contract directory, click MyToken.sol.
In the editor panel, replace MyToken
with whatever you’d like to name your token. In this example, we’ll use CorgiCoin
.
On the same line, replace the second string with whatever you want the symbol of your token to be. In this example, we’ll use CRG
.
That’s all we need to change within this contract. You can see on line 4 that this contract is importing another contract from @openzeppelin
for us, meaning that we can keep our custom token contract simple.
Click the green play symbol at the top of the workspace to compile your contract. You can also press CMD
+ s
on MacOS or CTRL
+ s
on Linux and Windows.
Remix automatically fetches the three import
contracts from the top of our .sol
contract. You can see these imported contracts under the .deps
directory. You can browse the contracts there, but Remix will not save any changes you make.
Now that we’ve successfully compiled our contract, we need to deploy it somewhere! This is where our previous MetaMask setup comes into play.
Click the Deploy tab from the left.
Under the Environment dropdown, select Injected Provider - MetaMask.
MetaMask will open a new window confirming that you want to connect your account to Remix.
Click Next:
Click Connect to connect your tFIL
account to Remix.
Back in Remix, under the Account field, you’ll see that it says something like 0x11F... (5 ether)
. This value is 5 tFIL
, but Remix doesn’t support the Filecoin network so doesn’t understand what tFIL
is. This isn’t a problem, it’s just a little quirk of using Remix.
Under the Contract dropdown, ensure the contract you created is selected.
Gather your MetaMask account address and populate the deploy field in Remix.
Click Deploy.
MetaMask will open a window and ask you to confirm the transaction. Scroll down and click Confirm to have MetaMask deploy the contract.
Back in Remix, a message at the bottom of the screen shows that the creation of your token is pending.
Wait around 90 seconds for the deployment to complete.
On the Filecoin network, a new set of blocks, also called a tipset, is created every thirty seconds. When deploying a contract, the transaction needs to be received by the network, and then the network needs to confirm the contract. This process takes around one to two tipsets to process – or around 60 to 90 seconds.
Now that we’ve compiled and deployed the contract, it’s time to actually interact with it!
Let’s call a method within the deployed contract to mint some tokens.
Back in Remix, open the Deployed Contracts dropdown, within the Deploy sidebar tab.
Expand the mint
method. You must fill in two fields here: to
and amount
.
The to
field specifies which address you want these initial tokens sent to. Open MetaMask, copy your address, and paste it into this field.
This field expects a value in the token’s smallest unit. The token uses 18 decimals, just like the FIL-to-attoFIL relationship: 1 token is equal to 1,000,000,000,000,000,000 base units. So if you wanted to mint 100 tokens, you would enter 100 followed by 18 zeros: 100000000000000000000.
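As a quick sanity check, the 18-decimal conversion can be computed with a few lines of Python. This is purely illustrative; the tutorial itself only needs the number above.

```python
# 1 FIL = 10**18 attoFIL, and this ERC-20 token likewise uses 18 decimals.
BASE_UNITS_PER_TOKEN = 10 ** 18

def to_base_units(whole_tokens: int) -> int:
    """Convert a whole-token amount to the 18-decimal base units
    expected by the mint method's `amount` field."""
    return whole_tokens * BASE_UNITS_PER_TOKEN

print(to_base_units(100))  # 100000000000000000000
```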
Click Transact.
MetaMask will open a window and ask you to confirm the transaction:
Again, you must wait for the network to process the transaction, which should take about 90 seconds. You can move on to the next section while you’re waiting.
Currently, MetaMask has no idea what our token is or what it even does. We can fix this by explicitly telling MetaMask the address of our contract.
Go back to Remix and open the Deploy sidebar tab.
Under Deployed Contracts, you should see your contract address at the top. Click the copy icon to copy the address to your clipboard:
Open MetaMask, select Assets, and click Import your tokens:
In the Token contract address field, paste the contract address you just copied from Remix and then click Add custom token. MetaMask should autofill the rest of the information based on what it can find from the Filecoin network.
Click Import token:
You should now be able to see that you have 100 of your tokens within your MetaMask wallet!
The Filecoin EVM runtime is deployed on Filecoin mainnet via the .
Like many other distributed teams, the FVM team works mostly on Slack. You can join the Filecoin Project Slack for free by going to . The FVM team hangs out in the following channels:
for building solutions on FVM and Filecoin
for development of the FVM
for FVM documentation
If you just need a general pointer or looking for technical FAQs, you can head over to the .
The connects grant makers with builders and researchers in the Filecoin community. Whether you represent a foundation that wants to move the space forward, a company looking to accelerate development on the features your application needs, or a developer team itching to hack on the FVM,
Here’s a collection of general FAQs that the team has gathered. If you are looking for more technical FAQs, please head to .
The (Filecoin virtual machine) enables developers to write and deploy custom code to run on top of the Filecoin blockchain. This means developers can create apps, markets, and organizations built around data stored on Filecoin.
is a Move-based L1 chain, whereas the FVM is a WASM runtime on the Filecoin chain. The latter comes with an EVM right out of the box; the former does not. The FVM also supports programmable storage with deals on Filecoin.
are code that come precompiled into the Filecoin clients and can be run using the FVM. They are similar to .
Not necessarily. You can use any of the public RPC nodes on either mainnet or the [Calibration testnet](/networks/calibration/details/).
They are synergistic. Compute over data solutions such as can use the FVM.
Many already compile to WASM so developers can pick their favorite.
You can use the npm package, or the client contract whose constructor calls mock_generate_deals().
It’s not impossible, but storage providers are incentivized not to terminate storage deals, as they are slashed for failing to provide proofs. Someone has to pay for the broken promise a provider makes to the chain, and you would most likely need a custom market actor to make such a deal. Deals must be made for a fixed duration; right now the boundaries are 6 to 18 months. You cannot ask a storage provider to take down your data without contacting them off-chain.
An Ethereum-style account, also called EthAccount.
Ethereum-native tooling can be used in conjunction with an Ethereum-style account.
If you’re an Ethereum developer, check out the .
Open your browser and visit the .
You may notice that we are currently connected to the Ethereum Mainnet. We need to point MetaMask to the Filecoin network, specifically the . We’ll use a website called to give MetaMask the information it needs quickly.
Go to .
Go to and click Send Funds.
The faucet will show a transaction ID. You can copy this ID into a Calibration testnet to view your transaction. After a couple of minutes, you should see some tFIL
transferred to your address.
The development environment we’re going to be using is called Remix, viewable at . Remix is an incredibly sophisticated tool, and there’s a lot you can play around with! In this tutorial however, we’re going to stick to the very basics. If you want to learn more, check out .
Open .
Having a bunch of tokens in your personal MetaMask is nice, but why not send some tokens to a friend? Your friend needs to create a wallet in MetaMask as we did in the and sections. They will also need to import your contract deployment address like you did in the section. Remember, you need to pay gas for every transaction that you make! If your friend tries to send some of your tokens to someone else but can’t, it might be because they don’t have any tFil
.
Instead of assigning a fixed gas cost to each EVM instruction, the Filecoin EVM runtime charges FIL gas based on the WASM execution of the Filecoin EVM runtime interpreter itself.
When executing a message that invokes an EVM contract, the Filecoin virtual machine charges for the message chain inclusion (when the message originates off-chain) and then invokes the actor that hosts the contract. The actor is an instance of the EVM actor, which uses the Filecoin EVM runtime interpreter to execute the contract.
The FEVM interpreter must first load its state, including the contract state, which costs additional gas. The interpreter then begins executing the contract bytecode. Each opcode interpreted may perform computation, syscalls, and state I/O, and may send new messages, all of which are charged in FIL gas. Finally, if the contract state is modified, the interpreter must flush it to the blockstore, which costs additional gas.
Generally, it is not possible to compute gas costs for a contract invocation without using gas estimation through speculative execution.
The total gas fee of a message is calculated as the following:
Take a look at the Gas usage section of the How Filecoin works page for more information on the various gas-related parameters attached to each message.
Let’s take a transaction as an example. Our gas parameters are:
GasUsage = 1000 gas units
BaseFee = 20 attoFIL per gas unit
GasLimit = 2000 gas units
GasPremium = 5 attoFIL per gas unit
The total fee is (GasUsage × BaseFee) + (GasLimit × GasPremium)
:
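Plugging in the example parameters, the fee works out as follows (a worked example of the formula above, not production code):

```python
# Worked example of the total-fee formula:
# total = (GasUsage * BaseFee) + (GasLimit * GasPremium)
gas_usage = 1000    # gas units actually consumed
base_fee = 20       # attoFIL per gas unit
gas_limit = 2000    # gas units
gas_premium = 5     # attoFIL per gas unit

burn = gas_usage * base_fee      # portion burned by the network
tip = gas_limit * gas_premium    # portion paid to the block producer
total_fee = burn + tip
print(total_fee)  # 30000 (attoFIL)
```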
Additionally, the message sender can also set the GasFeeCap
parameter they are willing to pay. If the sender sets the GasLimit
too high, the network will compute the amount of gas to be refunded and the amount of gas to be burned as OverEstimationBurn
.
Filecoin nodes, such as Lotus, have several JSON-RPC API endpoints designed to help developers estimate gas usage. The available JSON-RPC APIs are:
GasEstimateMessageGas
: estimate gas values for a message without any gas fields set, including GasLimit, GasPremium, and GasFeeCap. Returns a message object with those gas fields set.
GasEstimateGasLimit
takes the input message and estimates the GasLimit
based on the execution cost as well as a transaction multiplier.
GasEstimateGasPremium
: estimates what GasPremium
price you should set to ensure a message will be included in N
epochs. The smaller N
is the larger GasPremium
is likely to be.
GasEstimateFeeCap
: estimate the GasFeeCap
according to BaseFee
in the parent blocks.
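A request body for one of these methods can be sketched as follows. The addresses and value are placeholders for illustration, and the exact parameter shape (message, optional send spec, tipset key) is an assumption based on the Lotus Gas API; consult the JSON-RPC docs before relying on it.

```python
import json

# Hypothetical Filecoin.GasEstimateMessageGas request body. The gas
# fields are left at zero so the node estimates and fills them in.
message = {
    "To": "f0100",      # placeholder recipient
    "From": "f0101",    # placeholder sender
    "Value": "1000",    # attoFIL, encoded as a string
    "Method": 0,        # bare-value send
    "GasLimit": 0,
    "GasFeeCap": "0",
    "GasPremium": "0",
}
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "Filecoin.GasEstimateMessageGas",
    # params: [message, message send spec (None = defaults), tipset key]
    "params": [message, None, []],
}
body = json.dumps(payload)  # POST this to a node's /rpc/v1 endpoint
print(payload["method"])
```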
If you want to learn more about how to use those JSON-RPC APIs for the Filecoin gas model, please check the JSON RPC API docs for Gas.
Gas estimation varies from network to network. For example, the BaseFee
on mainnet is different from the BaseFee
on the Calibration testnet.
If you’d rather not calculate and estimate gas for every message, you can just leave the optional fields unset. The gas fields will be estimated and set when the message is pushed to the mempool.
Since Filecoin is fully EVM-compatible, Filecoin nodes also provide Ethereum-compatible APIs to support gas estimation:
EthEstimateGas: generates and returns an estimate of how much gas is necessary to allow the transaction to complete.
EthMaxPriorityFeePerGas: returns a fee per gas that is an estimate of how much you can pay as a priority fee, or “tip”, to get a transaction included in the current block.
To request the current max priority fee in the network, you can send a request to a public Filecoin endpoint:
This will output something like:
You can convert the result
field from hexadecimal to base 10 in your terminal. Take the result
output and remove the 0x
from the start, then convert the remaining hexadecimal digits to base 10.
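For example, in Python the conversion can be done directly with int(). The hex value below is illustrative, not a live network response:

```python
# Convert a hexadecimal `result` value to a base-10 attoFIL amount.
result = "0x2e90edd000"        # illustrative sample, not live data
atto_fil = int(result, 16)     # int() handles the 0x prefix directly
print(atto_fil)  # 200000000000
```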
While the Filecoin EVM runtime aims to be compatible with the Ethereum ecosystem, it has some marked differences.
Filecoin charges Filecoin gas only. This includes the Filecoin EVM runtime. Instead of the Filecoin EVM runtime charging gas according to the EVM spec for each EVM opcode executed, the Filecoin virtual machine (FVM) charges Filecoin gas for executing the EVM interpreter itself. The How gas works page goes into this in more detail. Importantly, this means that Filecoin EVM runtime gas costs and EVM gas costs will be very different:
EVM and Filecoin gas are different units of measurement and are not 1:1. Purely based on chain throughput (gas/second), the ratio of Ethereum gas to Filecoin gas is about 1:444. Expect Filecoin gas numbers to look much larger than those in Ethereum.
Because Filecoin charges Filecoin gas for executing the Filecoin EVM runtime interpreter:
Some instructions may be more expensive and/or cheaper in Filecoin EVM runtime than they are in the EVM.
EVM instruction costs can depend on the exact Filecoin EVM runtime code-paths taken, and caching.
Filecoin gas costs are not set in stone and should never be hard-coded. Future network upgrades will break any smart contracts that depend on gas costs not changing.
Solidity’s address.transfer and address.send calls grant a fixed gas stipend of 2300 Ethereum gas to the called contract. The Filecoin EVM runtime automatically detects such calls and sets the gas limit to 10 million Filecoin gas. This is a relatively more generous limit than Ethereum’s, but it’s future-proof: you should expect the called address to be able to carry out more work than in Ethereum.
Filecoin EVM runtime emulates EVM self-destruct behavior but isn’t able to entirely duplicate it:
There is no gas refund for self-destruct.
On self-destruct, the contract is marked as self-destructed, but is not actually deleted from the Filecoin state-tree. Instead, it simply behaves as if it does not exist. It acts like an empty contract.
Unlike in the EVM, in Filecoin EVM runtime, self-destruct can fail causing the executing contract to revert. Specifically, this can happen if the specified beneficiary address is an embedded ID address and no actor exists with the specified ID.
If funds are sent to a self-destructed contract after it self-destructs but before the end of the transaction, those funds remain with the self-destructed contract. In Ethereum, these funds would vanish after the transaction finishes executing.
The CALLCODE
opcode has not been implemented. Use the newer DELEGATECALL
opcode.
In Ethereum, SELFDESTRUCT
is the only way to send funds to a smart contract without giving the target smart contract a chance to execute code.
In Filecoin, any actor can use method 0
, also called a bare-value send, to transfer funds to any other actor without invoking the target actor’s code. You can think of this behavior as having the suggested PAY
opcode already implemented in Filecoin. However, by default, Solidity smart contracts do not accept bare-value transfers unless the author implements the receive() or fallback() function. For more information, see FIP Discussion #592.
Therefore, when the recipient is a smart contract, it is recommended to always use the InvokeEVM
method 3844450837
for sends to prevent loss of funds when sending to an f410f
/0x
address recipient.
The Filecoin EVM runtime, unlike Ethereum, does not usually enforce gas limits when calling precompiles. This means that it isn’t possible to prevent a precompile from consuming all remaining gas. The call actor
and call actor id
precompiles are the exception. However, they apply the passed gas limit to the actor call, not the entire precompile operation (i.e., the full precompile execution end-to-end can use more gas than specified, it’s only the final send
to the target actor that will be limited).
In Filecoin, contracts generally have multiple addresses. Two of these address types, f0
and f410f
, can be converted to 0x-style (Ethereum) addresses which can be used in the CALL
opcode. See Converting to a 0x-style address for details on how these addresses are derived.
Importantly, this means that any contract can be called by either its “normal” EVM address (corresponding to the contract’s f410f
address) or its “masked ID address” (corresponding to the contract’s f0
address).
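For illustration, the masked ID address can be derived as in the sketch below. The layout (a 0xff prefix byte, 11 zero bytes, then the actor ID as an 8-byte big-endian integer) is an assumption based on the FEVM address-conversion scheme; this is not an official library helper.

```python
def masked_id_address(actor_id: int) -> str:
    """Derive the 0x-style "masked ID address" for an f0 actor ID.

    Layout: 0xff prefix byte, 11 zero bytes, then the actor ID as an
    8-byte big-endian integer (20 bytes total).
    """
    raw = b"\xff" + b"\x00" * 11 + actor_id.to_bytes(8, "big")
    return "0x" + raw.hex()

# 20 bytes = 40 hex characters after the 0x prefix; the ID lands in
# the low-order bytes (1234 = 0x4d2).
print(masked_id_address(1234))
```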
However, the addresses returned by the CALLER, ORIGIN, and ADDRESS instructions will always be the same for the same contract.
The ADDRESS will always be derived from the executing contract’s f410f address, even if the contract was called via a masked ID address.
The CALLER/ORIGIN will be derived from the caller/origin’s f410f address if the caller/origin is an Ethereum-style account or an EVM smart contract. Otherwise, the caller/origin’s “masked ID address” (derived from their f0 address) will be used.
When calling an Ethereum method that allows the user to ask for the latest block, Filecoin will return the chain head - 1 block. This behavior was implemented for compatibility with the deferred execution mode that Filecoin uses. In this mode, messages submitted at a given height are only processed at height + 1. This means that receipts for a block produced at height are only available at height + 1.
The FilForwarder is a smart contract that lets users transfer FIL from an Ethereum-based f4 address to a Filecoin address of a different type.
Filecoin has multiple address spaces: f0, f1, f2, f3, and f4. Each address space fits a particular need of the Filecoin network. The f410 address space allows Ethereum addresses to be integrated into the Filecoin network.
Users interacting with the Filecoin EVM runtime need to use f4 addresses, masked to the Ethereum-style 0x address. These addresses can be created from wallets like MetaMask, Coinbase Wallet, or any other EVM-based wallet that allows for custom networks. There are use cases where a user with FIL in a 0x-style address would want to send FIL to an f1, f2, or f3 address. For example, taking FIL out of a smart contract and sending it to a multi-sig account or an exchange.
This is where the problem lies. Ethereum-based wallets do not recognize the f1, f2, or f3 address formats, making it impossible to send FIL to those addresses from an Ethereum-style wallet.
The FilForwarder exposes a smart contract method called forward that takes a byte-level representation of an f-style protocol address and a message value. It then uses the internal Filecoin APIs exposed through the Filecoin EVM runtime to send FIL funds reliably and as cheaply as possible. This also has the side effect of creating the actor ID if the receiving address is new. In this way, using FilForwarder to send from an Ethereum wallet to any other Filecoin address space is safe and reliable.
You can use the FilForwarder contract in two ways:
Using the Glif.io browser wallet
Manually invoking the contract
Before we start, make sure you know the address you’d like to forward your FIL to. You’ll need to ensure that the f410 Ethereum-style address has enough FIL to cover the transaction costs.
Go to Glif.io.
Select the network you want to use from the dropdown and click Connect Wallet.
In this example, we’re using the (now deprecated) Hyperspace testnet.
Confirm that you want to connect your wallet to Glif.io. You will only be prompted to do this once.
Click Close on the connection confirmation screen.
Select your wallet address from the dropdown and click Forward FIL.
Enter the destination address for your FIL, along with the amount of FIL you want to send:
Double-check that your destination address is correct and click Send.
You can check the transaction by clicking the transaction ID.
Your funds should be available at the destination after around two minutes. You can check that your funds have arrived by searching for the destination address in a block explorer.
If you can’t see your funds, make sure you’re viewing the correct network.
It generally takes around two minutes for a transaction to complete and for the funds to be available at the destination.
The FilForwarder contract can be interacted with using standard Ethereum tooling like Hardhat or Remix. In this guide, we’re going to use Hardhat, but these steps can be easily replicated using the web-based IDE Remix.
This guide assumes you have the following:
A Filecoin address stored in MetaMask
First, we need to grab the FilForwarder kit and install the dependencies:
Clone the FilForwarder repository and install the dependencies:
Use Yarn to install the project's dependencies:
Create an environment variable for your private key.
Always be careful when dealing with your private key. Double-check that you’re not hardcoding it anywhere or committing it to source control like GitHub. Anyone with access to your private key has complete control over your funds.
The contract is deterministically deployed on all Filecoin networks at 0x2b3ef6906429b580b7b2080de5ca893bc282c225. Any contract claiming to be a FilForwarder that does not reside at this address should not be trusted. Any dApp can connect to the wallet and use the ABI in this repository to call this method using any frontend. See the Glif section above for steps on using a GUI.
Inside this repository is a Hardhat task called forward. This task uses your private key to send funds through the contract, and reads the fil-forwarder-{CHAIN_ID}.json file to determine the deployed contract address for a given network. These addresses should always be the same, but the files save you from having to specify them each time.
The forward command uses the following syntax:
NETWORK: The network you want to use. The options are mainnet and calibration.
DESTINATION_ADDRESS: The address you want to send FIL to. This is a string, like t01024 or t3tejq3lb3szsq7spvttqohsfpsju2jof2dbive2qujgz2idqaj2etuolzgbmro3owsmpuebmoghwxgt6ricvq.
AMOUNT: The amount of FIL you want to send. The value 3.141 would be 3.141 FIL.
To send 9 FIL to a t3 address on the Calibration testnet, run:
To send 42.5 FIL to a t1 address on the Calibration testnet, run:
In the Filecoin network, an address is a unique identifier that refers to an actor in the Filecoin state. All actors in Filecoin have a corresponding address, and the appropriate address type varies with how the address is used.
Filecoin has five address classes, and actors tend to have multiple addresses. Furthermore, each address class has its own rules for converting between binary and text.
The goal of using different types of addresses is to provide a robust address format that is scalable, easy to use, and reliable. These addresses encode information including:
Network prefix: indicates the network the actor belongs to.
Protocol indicator: identifies the type and version of this address.
Payload: identifies the actor according to the protocol.
Checksum: validates the address.
Filecoin addresses can be represented either as raw bytes or a string. Raw bytes format will always be used on-chain. An address can also be encoded to a string, including a checksum and network prefix. The string format will never appear on-chain and is only for human-readable purposes.
A Filecoin address can be broken down like this:
Network prefix: f / t
Protocol indicator: 1 byte: 0 / 1 / 2 / 3 / 4
Payload: n bytes
Checksum: 4 bytes
The network prefix is prepended to an address when encoding to a string. The network prefix indicates which network an address belongs to. Network prefixes never appear on-chain and are only used when encoding an address to a human-readable format.
f - addresses on the Filecoin mainnet.
t - addresses used on any Filecoin testnet.
The protocol indicator identifies the address type, which describes how a method should interpret the information in the payload field of an address.
0: An ID address.
1: A wallet address generated from a secp256k1 public key.
2: An actor address.
3: A wallet address generated from a BLS public key.
4: A delegated address for user-defined foreign actors:
410: Ethereum-compatible address space managed by the Ethereum Address Manager (EAM). Each 410 address is equivalent to a 0x address.
Each address type is described below.
Every actor is sequentially assigned a short integer by the InitActor, a unique actor that can create new actors. This integer is the ID of that actor. An ID address is an actor’s ID prefixed with the network identifier and the protocol indicator. Therefore, any address in the Filecoin network has a unique ID address assigned to it.
The mainnet burn account ID address is f099 and is structured as follows:
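The components of an ID address like f099 can be pulled apart with a short sketch (parse_id_address is a hypothetical helper for illustration, not part of any Filecoin SDK):

```python
def parse_id_address(addr: str):
    """Split an ID address like "f099" into its components."""
    network = addr[0]            # "f" (mainnet) or "t" (testnet)
    protocol = addr[1]           # "0" marks an ID address
    assert network in ("f", "t") and protocol == "0"
    actor_id = int(addr[2:])     # the actor's sequentially assigned ID
    return network, actor_id

print(parse_id_address("f099"))  # → ('f', 99)
```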
Actor addresses represent actors deployed through the init actor in the Filecoin network. They provide a way to create robust addresses for actors not associated with a public key and are generated by taking a sha256 hash of the output of the account creation.
Actor addresses are often referred to by their shorthand protocol indicator, 2.
Addresses managed directly by users, like accounts, are derived from a public-private key pair. If you have access to a private key, you can sign messages sent from that wallet address. The public key is used to derive an address for the actor. Public key addresses are referred to as robust addresses as they do not depend on the Filecoin chain state.
Public key addresses allow devices, like hardware wallets, to derive a valid Filecoin address for your account using just the public key. The device doesn’t need to ask a remote node what your ID address is. Public key addresses provide a concise, safe, human-readable way to reference actors before the chain state is final. ID addresses are a space-efficient way to identify actors in the Filecoin chain state, where every byte matters.
Filecoin supports two types of public key addresses:
secp256k1 addresses, which begin with the protocol indicator 1.
BLS addresses, which begin with the protocol indicator 3.
t1iandfn6d...ddboqxbhoeva - a testnet wallet address generated using secp256k1.
t3vxj34sbdr3...road7cbygq - a testnet wallet address generated using BLS.
Filecoin supports extensible, user-defined actor addresses through the 4 address class, introduced in Filecoin Improvement Proposal (FIP) 0048. The 4 address class provides the following benefits to the network:
Implement foreign addressing systems in Filecoin.
A predictable addressing scheme to support interactions with addresses that do not yet exist on-chain.
User-defined, programmable addressing systems without extensive changes and network upgrades.
For example, a testnet delegated address using the Ethereum Addressing System is structured as follows:
The address manager actor ID is the actor ID of the address manager actor, which creates new actors and assigns a 4 address to the new actor. This leverages the extensible feature of the f4 address class.
The new actor ID is the arbitrary actor ID chosen by that actor.
Currently, per FIP 0048, f4 addresses may only be assigned by and in association with specific, built-in actors called address managers. This restriction will likely be relaxed once users are able to deploy custom WebAssembly actors.
This address type plays an essential role in supporting the FEVM. It allows the Filecoin network to recognize foreign addresses and to validate and execute transactions sent and signed by the supported foreign addresses.
The supported foreign addresses can be cast as f4/t4 addresses, and vice versa, but not as f1/t1 or f3/t3 addresses.
The Ethereum Address Manager (EAM) is a built-in actor that manages the Ethereum address space, anchored at the 410 address namespace. It acts like an EVM smart contract factory, offering methods to create and assign an f410/t410 Filecoin address to an Ethereum address.
The subaddress of an f410/t410 address is the original Ethereum address. Ethereum addresses can be cast as f410 addresses, and vice versa. The f410/t410 address is used by the Ethereum-compatible FVM (FEVM) development tools and applications built on the FEVM.
Example
If you have an Ethereum wallet address starting with 0x, then the Ethereum Address Manager (EAM) will assign a corresponding t410 Filecoin address to it. If you send 10 TFIL to 0xd388ab098ed3e84c0d808776440b48f685198498 using a wallet like MetaMask, you will receive 10 TFIL at your t410f2oekwcmo2pueydmaq53eic2i62crtbeyuzx2gmy address on the Filecoin Calibration testnet.
Similarly, if you deploy a Solidity smart contract on Filecoin Calibration, you will receive a smart contract address starting with t410, and the EAM will also assign a corresponding 0x Ethereum address to it.
When you invoke this smart contract on Filecoin using Ethereum tooling, you need to use the 0x5f6044198a16279f87d2839c998893858bbf8d9c smart contract address.
The Filecoin EVM runtime introduces support for 0x Ethereum-style addresses. Filecoin addresses starting with either f0 or f410f can be converted to the 0x format as follows:
Addresses starting with f0 can be converted to the 0x format by:
Extracting the actor_id (e.g., the 1234 in f01234).
Hex-encoding it with a 0xff prefix: sprintf("0xff0000000000000000000000%016x", actor_id).
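These two steps can be sketched in Python (f0_to_eth is a hypothetical helper name; it only illustrates the masked-ID encoding described above):

```python
def f0_to_eth(addr: str) -> str:
    """Convert an f0/t0 ID address to its masked 0x form."""
    assert addr[0] in ("f", "t") and addr[1] == "0"
    actor_id = int(addr[2:])   # e.g. 1234 from "f01234"
    # 0xff marker + 11 zero bytes + 8-byte big-endian actor ID = 20 bytes
    return "0xff" + "00" * 11 + format(actor_id, "016x")

print(f0_to_eth("f01234"))
```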
Addresses starting with f410f can be converted to the 0x format by:
Removing the f410f prefix.
Decoding the remainder as base32 (RFC 4648 without padding).
Trimming off the last 4 bytes. This is a checksum that can optionally be verified, but that’s beyond the scope of this documentation.
Asserting that the remaining address is 20 bytes long.
Hex-encoding the result: sprintf("0x%040x", address).
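The decoding steps can be sketched in Python (f410f_to_eth is a hypothetical helper; the example address pair is the EAM example used elsewhere on this page):

```python
import base64

def f410f_to_eth(addr: str) -> str:
    """Convert an f410f/t410f address to its 0x Ethereum form."""
    assert addr[0] in ("f", "t") and addr[1:5] == "410f"
    b32 = addr[5:].upper()
    # RFC 4648 base32 without padding: restore padding for Python's decoder.
    raw = base64.b32decode(b32 + "=" * (-len(b32) % 8))
    payload = raw[:-4]          # trim the trailing 4-byte checksum
    assert len(payload) == 20   # a valid Ethereum address is 20 bytes
    return "0x" + payload.hex()

print(f410f_to_eth("t410f2oekwcmo2pueydmaq53eic2i62crtbeyuzx2gmy"))
# → 0xd388ab098ed3e84c0d808776440b48f685198498
```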
f0 addresses are not re-org stable and should not be used until the chain has settled.
On the flip side, Ethereum-style addresses can be converted to a Filecoin address as follows:
Addresses starting with 0xff0000000000000000000000 can be converted to a Filecoin address by:
Decoding the last 16 hex digits into a uint64.
Formatting the address as f0${decimal(id)}, where decimal(id) is the decimal representation of the decoded actor ID.
Otherwise, it maps to f410f…
This section covers the very basics of how storing data works on the Filecoin network.
This section introduces two methods of performing storage deals: through the Filecoin Plus program or through various storage onramps. It also explains the features and advantages of using Filecoin and IPFS.
Because Filecoin and Ethereum use different address types within the Filecoin network, the process for transferring FIL between addresses can be a bit nuanced.
After the FVM launched, a new Ethereum-compatible address type (the f410 address) was introduced to the Filecoin network. An f410 address can be converted into an Ethereum-style address starting with 0x so that it can be used in Ethereum-compatible tooling and dApps. In this tutorial, we refer to native Filecoin addresses (which start with f) as f addresses and to Ethereum-style addresses (which start with 0x) as 0x addresses.
There are four paths for transferring FIL tokens across the Filecoin network, depending on which address type you are transferring from and to:
From a 0x address
From an f address
To a 0x address
To an f address
ASSETS ON THE FILECOIN NETWORK ARE NOT AVAILABLE ON ANY OTHER NETWORK Remember that Filecoin is fully compatible with Ethereum tools, like wallets. But that doesn’t mean you’re using the Ethereum network. These instructions transfer assets only within the Filecoin network. Learn how to configure your Ethereum wallet on the Filecoin network.
If you want to transfer FIL tokens from one f4 address to another f4 address using their corresponding 0x addresses, you need to understand how to convert between f4 and 0x addresses.
If you have an f4 address, you can convert it to a 0x address using the Beryx address converter.
If you have a 0x address, you can search for it directly on Filfox Explorer, which will show the 0x address and the corresponding f4 address.
Apart from that, you just need to follow the standard process using your preferred Ethereum-compatible wallet, like MetaMask, MathWallet, etc. For instance, MetaMask has a simple guide on how to send Ethereum from one account to another.
If you want to transfer FIL tokens from an Ethereum-style 0x address to another Filecoin address type, like an f1 or f3 address, follow the steps in the FilForwarder tutorial.
Most wallets and exchanges currently support Filecoin f1 or f3 addresses, and many of them, including OKX, Kraken, and Btcturk, already fully support f4 and 0x addresses. Some exchanges, however, are still implementing support for f4 addresses. If your preferred wallet or exchange doesn’t let you transfer FIL directly to an f4 or Ethereum-style 0x address, we recommend filing a support issue with the exchange to help accelerate support for f4 addresses.
The process for sending FIL from a Filecoin f address to an Ethereum-style 0x address depends on the wallet or exchange you use.
Ledger Live supports sending to a Filecoin f4 address, which has an automatic 0x equivalent that you can look up on any block explorer. This allows you to transfer your FIL directly to an Ethereum-style 0x address using its f4 equivalent.
Sending directly to a 0x address does not work in Ledger Live; you must use the f4 equivalent.
A hot wallet is a cryptocurrency wallet that is always connected to the internet. They allow you to store, send, and receive tokens. Because hot wallets are always connected to the internet, they tend to be somewhat more vulnerable to hacks and theft than cold storage methods. However, they are generally easier to use than cold wallets and do not require any specific hardware like a Ledger device.
If you want to transfer your FIL tokens from an f1/f3 address to a 0x address, but the wallet or exchange you are using does not support f4 and 0x style addresses, you can create a burner wallet using Glif, transfer FIL to the burner wallet, and then transfer FIL from the burner wallet to the 0x address in MetaMask.
Navigate to glif.io and create a burner wallet.
Click Create Seed Phrase. Write down your seed phrase somewhere safe. You can also copy or download the seed phrase. You will need it later.
Click I’ve recorded my seed phrase. Using your seed phrase, enter the missing words in the blank text fields.
Click Next, and then Connect. The burner wallet is created.
In the upper left corner of your wallet dashboard, click on the double squares icon next to your address to copy it. Record this address. You will need it later.
From your main wallet account or exchange, transfer your FIL token to this address.
Connect to MetaMask and copy your 0x address.
Once the funds appear in the burner wallet, click on Send FIL.
Enter the necessary information into the text fields:
In the Recipient field, enter your 0x style address. GLIF automatically converts it to an f4 address.
In the Amount field, enter the amount of FIL to send. Make sure you have enough FIL to cover the gas cost.
Click Send. The FIL will arrive in your MetaMask wallet shortly.
If you are transferring FIL from an exchange to your 0x address on MetaMask, make sure the exchange supports withdrawing FIL to a 0x or f410 address. If not, you will need extra steps to withdraw FIL to your 0x address. Taking Coinbase as an example, you can follow this guide: How to transfer FIL from Coinbase to a Metamask Wallet (0x).
There are no special steps or requirements for sending Filecoin from one Filecoin-style address to another on the Filecoin network.
The goal of the Filecoin Plus program is to increase the amount of useful data stored with storage providers by clients on the Filecoin network.
In short, this is achieved by appointing allocators responsible for assigning DataCap tokens to clients that the allocator has vetted as trusted parties storing useful data. Clients then pay DataCap to storage providers as part of a storage deal, which increases a storage provider’s probability of earning block rewards. This mechanism is described in full below.
Filecoin Plus creates demand on the Filecoin network, ensuring the datasets stored on the network are legitimate and useful to either the clients or a third party.
Filecoin Plus introduces two concepts important to interactions on the Filecoin network – DataCap and Quality Adjusted Power (QAP).
DataCap is a token paid to storage providers as part of a deal in which the client and the data they are storing is verified by a Filecoin Plus allocator. Batches of DataCap are granted to allocators by root-key holders, allocators give DataCap to verified clients, and clients pay DataCap to storage providers as part of a deal. The more DataCap a storage provider ends up with, the higher probability they have to earn block rewards. The role of each of these participants, and how DataCap is used in a Filecoin Plus deal, is described below in the "Filecoin Plus Processes & Participants" section.
Quality-Adjusted Power is a rating assigned to a given sector, the basic unit of storage on the Filecoin network. It is a function of several features of the sector, including, but not limited to, the sector’s size and promised duration, and whether the sector includes a Filecoin Plus deal. The network recognizes that a sector includes a Filecoin Plus deal when a deal in that sector involves DataCap paid to the storage provider. The more Filecoin Plus verified data a storage provider has in its sectors, the higher its Quality-Adjusted Power. This linearly increases the number of votes the provider has in the Secret Leader Election, which determines which storage provider gets to serve as the verifier for the next block in the blockchain, and thus increases the probability that the provider earns block rewards. For more details on Quality-Adjusted Power, see the Filecoin specification.
There is a common misconception that a Filecoin Plus deal increases the miner’s reward paid to a Filecoin storage provider by a factor of ten. This is not true: Filecoin+ does not increase the amount of block rewards available to storage providers. Including Filecoin Plus deals in a sector increases the Quality Adjusted Power of a storage provider, which increases the probability that the storage provider is selected as the block verifier for the next block on the Filecoin blockchain, and thus increases the probability they earn block rewards.
Consider first a network with ten storage providers. Initially, each storage provider has an equal 10% probability of winning available block rewards in a given period:
In the above visualization, "VD" means "verified deals", that is, deals that have been reviewed by allocators and have associated spending of datacap.
If two of these storage providers begin filling their sectors with verified deals, their chances of winning a block reward increase by a factor of ten relative to their peers. Each of the storage providers with verified deals in their sectors has a 36% chance of winning the block reward, while storage providers with only regular deals in their sectors have a 4% probability of winning the block rewards.
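The percentages above follow directly from each provider’s share of quality-adjusted power. A quick sketch (illustrative only, assuming equal raw power per provider; win_probabilities is a hypothetical helper):

```python
def win_probabilities(n_verified: int, n_regular: int, multiplier: int = 10):
    """Per-provider win probability: QAP share of the network total."""
    total_qap = n_verified * multiplier + n_regular
    return multiplier / total_qap, 1 / total_qap

# Ten providers, two of which fill their sectors with verified deals:
p_vd, p_reg = win_probabilities(n_verified=2, n_regular=8)
print(f"{p_vd:.0%} {p_reg:.0%}")  # → 36% 4%
```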
Incentives for storage providers to accept verified deals are strongest initially. As more and more storage providers include verified deals in their sectors, the probability that any one of them earns the block rewards returns to an equal chance.
As seen in the diagrams above, Filecoin Plus increases the collateral requirements needed by a storage provider. As a higher percentage of storage providers include verified deals in their sectors, the collateral needed by each storage provider will increase. To learn more about storage provider collateral, see this link.
The participants of the Filecoin+ program, along with how they interact with each other, is detailed here:
Decisions as to who the root-key holders should be, how they should grant and remove batches of DataCap to/from allocators, and other important decisions about the Filecoin+ program are determined through Filecoin Improvement Proposals (FIPs), the community governance process. Learn more about Filecoin+ governance. To see a list of FIPs, see this link.
Root-key holders execute the governance process for Filecoin+ as determined through community-executed Filecoin Improvement Proposals; their role is to grant and remove batches of DataCap to and from allocators. Root-key holders are signers to a multisig wallet on-chain; a majority of signers is needed for an allocator to be granted or removed.
Allocators perform due diligence on clients and the data they are storing, allocate DataCap to trusted clients, and facilitate predetermined dispute resolution processes. To learn more about how allocators are chosen and evaluated, see this blog.
Clients are participants in the Filecoin network who store data with a storage provider. A trusted client, as determined by an allocator who performs due diligence on the client and the data they are looking to store, will be given DataCap by the allocator. Clients offer to give this DataCap to a storage provider as part of a deal, which increases the “deal quality multiplier” of the deal, and in turn the likelihood a storage provider will accept the deal.
Storage providers who receive DataCap as part of a deal are able to use this DataCap to increase their “quality adjusted power” of the storage provider on the network by a factor of ten. As described above, this increases their probability of being selected as the verifier for a block, affording them the opportunity to earn block rewards.
A visualization of the interactions between parties involved in a Filecoin+ deal described above is shown below in Figure 1.
Clients can secure DataCap by making a request to an allocator. Each of the allocators maintains their own application process for requesting DataCap.
One such allocator is Filecoin Incentive Design Labs (FIDL). They maintain a Github repository that includes an application where clients can make a request of FIDL for DataCap. Clients and builders looking to acquire DataCap may consider applying directly with FIDL, noting that all DataCap applications are transparent and open for public review on the issues page.
The steps a client should follow to acquire DataCap are as follows:
Create a Filecoin wallet.
Choose an allocator from the full list of active allocators or the active list of allocators who have verified public datasets.
Check that you satisfy the requirements of the allocator. In the case of uploading open source datasets with FIDL as the allocator, the client will need to demonstrate to FIDL that they can (1) satisfy a third-party Know Your Customer (KYC) identity check, (2) provide the details of the storage provider (entity, storage location) where the data is intended to be stored, and (3) demonstrate proof that the dataset can be actively retrieved. You can learn more about FIDL’s requirements and application process.
Submit an application for DataCap from an allocator. You can submit a request to FIDL via their Github application form or Google Form.
Use the DataCap in a storage deal.
For builders on the Calibration testnet who need testnet DataCap to test their applications, a faucet is available. The steps a builder should follow to acquire testnet DataCap are as follows:
Create a wallet on Filecoin Calibration testnet. For more information, see the Calibration docs or Github.
Grant the wallet address DataCap by using this faucet.
Smart contracts can acquire and use DataCap just like any regular client. To do so, simply enter the f410 address of the smart contract as the client address when making a request for DataCap.
It’s important to note that DataCap allocations are a one-time credit for a Filecoin address and cannot be transferred between smart contracts. If you need to redeploy the smart contract, you must request additional DataCap.
Once you have an address with DataCap, you can make deals using DataCap as a part of the payment. Because storage providers receive a deal quality multiplier for taking Filecoin+ deals, many storage providers offer special pricing and services to attract clients who use DataCap to make deals.
Learn more about Storage Deals.
By default, when you make a deal with an address with DataCap allocated, you will spend that DataCap when making the deal.
There are three resources you can use to check the current status of the Filecoin+ deals and participants:
The Filecoin Pulse dashboard includes visualizations of and tables for data about Filecoin+ deals on the Filecoin blockchain, organized by Allocators, Clients, and Storage Providers.
The Datacap Stats dashboard shows DataCap allocations, including the number of allocators, clients, and storage providers. You can also see number and size of deals.
The Starboard Dashboard includes network health data related to Filecoin+ verified deals.
Curious about how it all got started, or where we’re headed? Learn about the history, current state, and future trajectory of the Filecoin project here.
The Filecoin Community Roadmap is updated quarterly. It provides insight into the strategic development of the network and offers pathways for community members to learn more about ongoing work and connect directly with project teams.
Learn about the ongoing cryptography research and design efforts that underpin the Filecoin protocol on the Filecoin Research website. The CryptoLab at Protocol Labs also actively researches improvements.
The Filecoin community believes that our mission is best served in an environment that is friendly, safe, and accepting, and free from intimidation or harassment. To that end, we ask that everyone involved in Filecoin read and respect our code of conduct.
This page discusses what verified deals are, and how they can impact storage providers.
Filecoin aims to be a decentralized storage network for humanity’s essential information. To achieve this, it’s crucial to add valuable data to the network. Filecoin Plus is a social trust program encouraging storage providers to store data in verified deals. A deal becomes verified after the data owner (client) completes a verification process, where community allocators assess the client’s use of Filecoin to determine its relevance and value to the Filecoin mission: storing and preserving humanity’s vital data. Allocators conduct due diligence by questioning clients and building reasonable confidence in their trustworthiness and use case.
Allocators (formerly called notaries) are responsible for allocating a resource called DataCap to clients with valuable storage use cases. DataCap is a non-exchangeable asset that allocators grant to data clients. DataCap is assigned to a wallet but cannot be sold or exchanged. The client can only spend DataCap as part of making a verified deal with a storage provider. DataCap is a single-use credit, and a client’s DataCap balance is deducted based on the size of the data stored in verified deals.
Storage providers are incentivized by the Filecoin network to store verified deals. A 10x quality adjustment multiplier is set at the protocol level for storage offered for verified deals. A 100 TiB dataset will account for 1 PiB of Quality-Adjusted-Power (QAP). This means the storage provider has a larger share of storage power on the Filecoin network and will be more likely to get elected for WinningPoSt (see Storage proving). The storage provider will earn 10x more block rewards for the same capacity made available to the network, if that capacity is storing verified deals.
When storing real customer data rather than simply CC (committed capacity) sectors, a whole new set of responsibilities arises. A storage provider must have the capacity to make deals, obtain a copy of the data, prepare the data for the network, prove the data on-chain via sealing, and, last but not least, offer retrieval of the data to the client when requested.
As a storage provider, you play a crucial role in the ecosystem. Unlike miners in other blockchains, storage providers must do more than offer disk space to the network. Whether onboarding new customers to the network or storing copies of data from other storage providers for clients seeking redundancy, providing storage can involve:
Business development.
Sales and marketing efforts.
Hiring additional personnel.
Networking.
Relationship building.
Acquiring data copies requires systems and infrastructure capable of ingesting large volumes of data, sometimes up to a PiB. This necessitates significant internet bandwidth, with a minimum of 10 Gbps. For instance, transferring 1 PiB of data takes approximately 240 hours on a 10 Gbps connection. However, many large storage providers use internet connections of up to 100 Gbps.
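As a back-of-the-envelope check on that figure (assuming an ideal, fully saturated link with no protocol overhead; transfer_hours is a hypothetical helper):

```python
def transfer_hours(data_bytes: float, link_bps: float) -> float:
    """Hours needed to move data_bytes over a link of link_bps bits/s."""
    return data_bytes * 8 / link_bps / 3600

PIB = 2 ** 50
print(round(transfer_hours(PIB, 10e9)))  # 1 PiB over 10 Gbps ≈ 250 hours
```

The idealized result lands in the same ballpark as the docs’ roughly 240-hour estimate.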
Data preparation, which involves separating files and folders in CAR files, is time-consuming and requires expertise. You can delegate this task to a Data Preparer for a fee or assume the role yourself. Tools like Singularity simplify this process.
Once the data is sealed and you are proving your copies on-chain (i.e. on the blockchain), you will need to offer retrievals to your customer as well. This obviously requires network bandwidth once more, so you may need to charge for retrievals accordingly.
Tools and programs exist to support Filecoin Plus, but storage providers need to know how to operate this entire workflow. See Filecoin Plus Programs for more information on available programs. See Architecture for more information on the tooling and software components.
With great power comes great responsibility, and that also holds for storage power: rewards on Fil+ deals are 10x, but so are the penalties. Because a sector of 32 GiB counts for 320 GiB of storage power (10x), the rewards and the penalties are calculated on the QAP of 320 GiB. Filecoin Plus allows a storage provider to earn more block rewards on a verified deal compared to a regular data deal. The 10x multiplier on storage power that comes with a verified deal, however, also requires 10x collateral from the storage provider.
If the storage provider is then not capable of keeping the data and systems online and fails to submit the daily required proofs (WindowPoSt) for that data, the penalties (slashing) are also 10x higher than over regular data deals or CC sectors. Larger storage power means larger block rewards, larger collateral and larger slashing. The stakes are high - after all, we’re storing humanity’s most important information with Filecoin.