Manifest File


# Manifest File

The Manifest `project.yaml` file can be seen as the entry point of your project. It defines most of the details of how SubQuery will index and transform the chain data, clearly indicating where we are indexing data from and which on-chain events we are subscribing to.

Pick your preferred network below to see an example manifest for each:

**Polkadot**

```yaml
specVersion: 1.0.0
name: subquery-starter
version: 0.0.1
runner:
  node:
    name: '@subql/node'
    version: '*'
  query:
    name: '@subql/query'
    version: '*'
description: 'This project can be used as a starting point for developing your Polkadot based SubQuery project'
repository: https://github.com/subquery/subql-starter
schema:
  file: ./schema.graphql
network:
  chainId: '0x91b171bb158e2d3848fa23a9f1c25182fb8e20313b2c1eb49219da7a70ce90c3'
  # Must be a non-pruned archive node
  endpoint: wss://polkadot.api.onfinality.io/public-ws
  # Optionally provide the HTTP endpoint of a full chain dictionary to speed up processing
  dictionary: https://api.subquery.network/sq/subquery/polkadot-dictionary
dataSources:
  - kind: substrate/Runtime
    startBlock: 1 # Block to start indexing from
    mapping:
      file: ./dist/index.js
      handlers:
        - handler: handleBlock
          kind: substrate/BlockHandler
        - handler: handleEvent
          kind: substrate/EventHandler
          filter:
            module: balances
            method: Deposit
        - handler: handleCall
          kind: substrate/CallHandler
```

**Avalanche**

```yaml
specVersion: 1.0.0
name: avalanche-subql-starter
version: 0.0.1
runner:
  node:
    name: '@subql/node-avalanche'
    version: latest
  query:
    name: '@subql/query'
    version: latest
description: 'This project can be used as a starting point for developing your Avalanche based SubQuery project'
repository: https://github.com/subquery/avalanche-subql-starter
schema:
  file: ./schema.graphql
network:
  chainId: mainnet
  type: avalanche
  chainName: C
  # Must be a non-pruned archive node
  endpoint: https://avalanche.api.onfinality.io/
  # Optionally provide the HTTP endpoint of a full chain dictionary to speed up processing
  dictionary: https://api.subquery.network/sq/subquery/avalanche-dictionary
dataSources:
  - kind: avalanche/Runtime
    startBlock: 1 # Block to start indexing from
    options:
      # Must be a key of assets
      abi: erc20
      ## Pangolin token https://snowtrace.io/token/0x60781c2586d68229fde47564546784ab3faca982
      address: "0x60781C2586D68229fde47564546784ab3fACA982"
    assets:
      erc20:
        file: "IPangolinERC20.json"
    mapping:
      file: ./dist/index.js
      handlers:
        - handler: handleBlock
          kind: avalanche/BlockHandler
        - handler: handleTransaction
          kind: avalanche/TransactionHandler
          filter:
            ## The function can either be the function fragment or signature
            # function: '0x095ea7b3'
            # function: '0x7ff36ab500000000000000000000000000000000000000000000000000000000'
            function: approve(address spender, uint256 rawAmount)
            ## from: "0x60781C2586D68229fde47564546784ab3fACA982"
        - handler: handleLog
          kind: avalanche/LogHandler
          filter:
            topics:
              ## Follows standard log filters https://docs.ethers.io/v5/concepts/events/
              - Transfer(address indexed from, address indexed to, uint256 amount)
```

**Cosmos (Juno)**

```yaml
specVersion: 1.0.0
name: cosmos-juno-subql-starter
version: 0.0.1
runner:
  node:
    name: '@subql/node-cosmos'
    version: latest
  query:
    name: '@subql/query'
    version: latest
description: 'This project can be used as a starting point for developing your Cosmos (CosmWasm) based SubQuery project using an example from Juno'
repository: https://github.com/subquery/cosmos-subql-starter
schema:
  file: ./schema.graphql
network:
  chainId: juno-1
  # Must be a non-pruned archive node
  endpoint: https://rpc.juno-1.api.onfinality.io
  # Optionally provide the HTTP endpoint of a full chain dictionary to speed up processing
  dictionary: https://api.subquery.network/sq/subquery/cosmos-juno-1-dictionary
dataSources:
  - kind: cosmos/Runtime
    startBlock: 3062001 # first block on juno-1
    #chainTypes: # This is a beta feature that allows support for any Cosmos chain by importing the correct protobuf messages
    #  cosmos.slashing.v1beta1:
    #    file: "./proto/cosmos/slashing/v1beta1/tx.proto"
    #    messages:
    #     - "MsgUnjail"
    mapping:
      file: ./dist/index.js
      handlers:
        - handler: handleBlock
          kind: cosmos/BlockHandler
        - handler: handleTransaction
          kind: cosmos/TransactionHandler
        - handler: handleEvent
          kind: cosmos/EventHandler
          filter:
            type: execute
            messageFilter:
              type: "/cosmwasm.wasm.v1.MsgExecuteContract"
              # contractCall field can be specified here too
              #values: # A set of key/value pairs that are present in the message data
                #contract: "juno1v99ehkuetkpf0yxdry8ce92yeqaeaa7lyxr2aagkesrw67wcsn8qxpxay0"
        - handler: handleMessage
          kind: cosmos/MessageHandler
          filter:
            type: "/cosmwasm.wasm.v1.MsgExecuteContract"
            # Filter to only messages with the provide_liquidity function call
            #contractCall: "provide_liquidity" # The name of the contract function that was called
            #values: # A set of key/value pairs that are present in the message data
              #contract: "juno1v99ehkuetkpf0yxdry8ce92yeqaeaa7lyxr2aagkesrw67wcsn8qxpxay0"
```

**Algorand**

```yaml
specVersion: 1.0.0
name: algorand-subql-starter
version: 1.0.0
runner:
  node:
    name: '@subql/node-algorand'
    version: '>=1.0.0'
  query:
    name: '@subql/query'
    version: '*'
description: >-
  This project can be used as a starting point for developing your Algorand SubQuery project
repository: 'https://github.com/subquery/algorand-subql-starter'
schema:
  file: ./schema.graphql
network:
  chainId: "wGHE2Pwdvd7S12BL5FaOP20EGYesN73ktiC1qzkkit8="
  # Must be a non-pruned archive node
  endpoint: "https://algoindexer.algoexplorerapi.io"
  # Optionally provide the HTTP endpoint of a full chain dictionary to speed up processing
  dictionary: https://api.subquery.network/sq/subquery/Algorand-Dictionary
  # For the testnet use the following
  # chainId: "SGO1GKSzyE7IEPItTxCByw9x8FmnrCDexi9/cOUJOiI="
  # endpoint: "https://algoindexer.testnet.algoexplorerapi.io"
dataSources:
  - kind: algorand/Runtime
    startBlock: 50000 # Block to start indexing from
    mapping:
      file: ./dist/index.js
      handlers:
        - handler: handleBlock
          kind: algorand/BlockHandler
        - handler: handleTransaction
          kind: algorand/TransactionHandler
          filter:
            txType: pay
            sender: "ZW3ISEHZUHPO7OZGMKLKIIMKVICOUDRCERI454I3DB2BH52HGLSO67W754"
            receiver: "ZW3ISEHZUHPO7OZGMKLKIIMKVICOUDRCERI454I3DB2BH52HGLSO67W754"
        - handler: handleTransaction
          kind: algorand/TransactionHandler
          filter:
            txType: acfg
            applicationId: 1
```

**Terra**

```yaml
specVersion: 1.0.0
name: terra-subql-starter
version: 0.0.1
runner:
  node:
    name: '@subql/node-terra'
    version: latest
  query:
    name: '@subql/query'
    version: latest
description: 'This project can be used as a starting point for developing your Terra based SubQuery project'
repository: https://github.com/subquery/terra-subql-starter
schema:
  file: ./schema.graphql
network:
  chainId: columbus-5
  endpoint: https://terra-columbus-5.beta.api.onfinality.io
  # Optionally provide the HTTP endpoint of a full chain dictionary to speed up processing
  dictionary: https://api.subquery.network/sq/subquery/terra-columbus-5-dictionary
  # Strongly suggested to provide a mantlemint endpoint to speed up processing
  mantlemint: https://mantlemint.terra-columbus-5.beta.api.onfinality.io
dataSources:
  - kind: terra/Runtime
    startBlock: 4724001
    mapping:
      file: ./dist/index.js
      handlers:
        - handler: handleBlock
          kind: terra/BlockHandler
        - handler: handleTransaction
          kind: terra/TransactionHandler
        - handler: handleEvent
          kind: terra/EventHandler
          filter:
            type: transfer
            messageFilter:
              type: /terra.wasm.v1beta1.MsgExecuteContract
              values:
                contract: terra1j66jatn3k50hjtg2xemnjm8s7y8dws9xqa5y8w
        - handler: handleMessage
          kind: terra/MessageHandler
          filter:
            type: /terra.wasm.v1beta1.MsgExecuteContract
            values:
              contract: terra1j66jatn3k50hjtg2xemnjm8s7y8dws9xqa5y8w
```

**Polkadot (manifest v0.2.0)**

```yaml
specVersion: "0.2.0"
name: example-project # Provide the project name
version: 1.0.0  # Project version
description: '' # Description of your project
repository: 'https://github.com/subquery/subql-starter' # Git repository address of your project

schema:
  file: ./schema.graphql # The location of your GraphQL schema file

network:
  genesisHash: '0x91b171bb158e2d3848fa23a9f1c25182fb8e20313b2c1eb49219da7a70ce90c3' # Genesis hash of the network
  endpoint: 'wss://polkadot.api.onfinality.io/public-ws'
  # Optionally provide the HTTP endpoint of a full chain dictionary to speed up processing
  dictionary: 'https://api.subquery.network/sq/subquery/dictionary-polkadot'

dataSources:
  - kind: substrate/Runtime
    startBlock: 1 # This changes your indexing start block, set this higher to skip initial blocks with less data
    mapping:
      file: "./dist/index.js"
      handlers:
        - handler: handleBlock
          kind: substrate/BlockHandler
        - handler: handleEvent
          kind: substrate/EventHandler
          filter: #Filter is optional
            module: balances
            method: Deposit
        - handler: handleCall
          kind: substrate/CallHandler
```

# Migrating to v1.0.0

If you have a project with a specVersion below v1.0.0, you can use `subql migrate` to quickly upgrade. See the CLI documentation for more information.

# Change Log for v1.0.0

Under runner:

  • Now that SubQuery supports multiple layer-1 networks, you must provide runner information for the various services.
  • runner.node specifies the node image that is used to run the current project, e.g. @subql/node or @subql/node-avalanche.
  • runner.query specifies the query service image associated with the project database; use @subql/query.
  • version specifies the version of each service. Versions should follow the SEMVER rules and match a published version on our package repository; latest and dev are not supported (see the example below).
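
For example, a minimal runner block pinning both services to published versions might look like this (the version numbers are illustrative):

```yaml
runner:
  node:
    name: '@subql/node'   # or '@subql/node-avalanche' for Avalanche projects
    version: '1.0.0'      # illustrative; use a published SEMVER version
  query:
    name: '@subql/query'
    version: '1.0.0'      # illustrative
```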

Under templates:

Templates were introduced in manifest v0.2.1. They allow data sources to be created dynamically while the project is running, which is useful when you don't know certain details at the time you create your project. A good example of this is when you know a contract will be deployed at a later stage but you don't yet know what its address will be.

For a more detailed explanation head here.
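
As a rough sketch (the template name, handler name, ABI asset, and processor bundle path below are illustrative, and the exact fields depend on the data source kind you use), a template entry mirrors a data source but carries a name and omits the address and start block, which are supplied when the template is instantiated from your mapping code:

```yaml
templates:
  - name: erc20Template            # illustrative template name, referenced from mapping code
    kind: substrate/FrontierEvm    # any supported data source kind can be templated
    processor:
      file: ./node_modules/@subql/frontier-evm-processor/dist/bundle.js # illustrative path
      options:
        abi: erc20                 # no address here - it is provided when the template is instantiated
    assets:
      erc20:
        file: ./erc20.abi.json     # illustrative ABI asset
    mapping:
      file: ./dist/index.js
      handlers:
        - handler: handleErc20Transfer   # illustrative handler name
          kind: substrate/FrontierEvmEvent
          filter:
            topics:
              - Transfer(address indexed from, address indexed to, uint256 value)
```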

# Change log for v0.2.0

Under network:

  • There is a new required genesisHash field which helps to identify the chain being used.
  • For v0.2.0 and above, you can reference an external chaintype file if you are indexing a custom chain.

Under dataSources:

  • You can now directly link an index.js entry point for mapping handlers. By default this index.js is generated from index.ts during the build process.
  • Data sources can now be either a regular runtime data source or a custom data source.

# Migrating from v0.2.0 to v0.3.0

If you have a project with specVersion v0.2.0, the only change is a new required chainId field, which helps to identify the chain being used.
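
For instance, using the Polkadot genesis hash from the starter manifest above, the network section would now include (a minimal sketch):

```yaml
network:
  # In v0.2.0 only genesisHash was required; v0.3.0 adds the required chainId field.
  chainId: '0x91b171bb158e2d3848fa23a9f1c25182fb8e20313b2c1eb49219da7a70ce90c3'
  endpoint: 'wss://polkadot.api.onfinality.io/public-ws'
```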

# CLI Options

`subql migrate` can be run in an existing project to migrate the project manifest to the latest version.

# Overview

# Top Level Spec

| Field | v1.0.0 | v0.2.0 | Description |
| --- | --- | --- | --- |
| specVersion | String | String | The spec version of the manifest file |
| name | String | String | Name of your project |
| version | String | String | Version of your project |
| description | String | String | Description of your project |
| repository | String | String | Git repository address of your project |
| schema | Schema Spec | Schema Spec | The location of your GraphQL schema file |
| network | Network Spec | Network Spec | Detail of the network to be indexed |
| dataSources | DataSource Spec | DataSource Spec | The data sources for your project |
| templates | Templates Spec | Templates Spec | Allows creating new data sources from these templates |
| runner | Runner Spec | Runner Spec | Runner specs info |

# Schema Spec

| Field | All manifest versions | Description |
| --- | --- | --- |
| file | String | The location of your GraphQL schema file |

# Network Spec

If you start your project by using the subql init command, you'll generally receive a starter project with the correct network settings. If you are changing the target chain of an existing project, you'll need to edit the Network Spec section of this manifest.

The chainId or genesisHash is the network identifier of the blockchain. Examples in Terra include bombay-12 or columbus-5, in Avalanche it might be mainnet, in Cosmos juno-1, and in Substrate and Algorand it is always the genesis hash of the network (the hash of the first block). For Substrate you can retrieve this easily by going to PolkadotJS and looking up the hash of block 0 (see the image below).

(Image: the genesis hash of block 0 as shown in PolkadotJS)

Additionally, you will need to update the endpoint. This defines the wss endpoint of the blockchain to be indexed; it must be a full archive node. You can retrieve endpoints for all parachains for free from OnFinality.

| Field | v1.0.0 | v0.2.0 | Description |
| --- | --- | --- | --- |
| chainId | String | x | A network identifier for the blockchain (the genesisHash in Substrate) |
| genesisHash | String | String | The genesis hash of the network (from v1.0.0 this is an alias for chainId and no longer necessary) |
| endpoint | String | String | Defines the wss or ws endpoint of the blockchain to be indexed; this must be a full archive node. You can retrieve endpoints for all parachains for free from OnFinality |
| port | Number | Number | Optional port number on the endpoint to connect to |
| dictionary | String | String | It is suggested to provide the HTTP endpoint of a full chain dictionary to speed up processing; read how a SubQuery Dictionary works |
| chaintypes | {file:String} | {file:String} | Path to chain types file, accepts .json or .yaml format |

# Runner Spec

| Field | v1.0.0 | Description |
| --- | --- | --- |
| node | Runner node spec | Describes the node service used for indexing |
| query | Runner query spec | Describes the query service |

# Runner Node Spec

| Field | v1.0.0 | Description |
| --- | --- | --- |
| name | String | We currently support @subql/node and soon @subql/node-avalanche |
| version | String | Version of the indexer node service; it must follow the SEMVER rules or latest. Available versions can be found in the SubQuery SDK releases |

# Runner Query Spec

| Field | All manifest versions | Description |
| --- | --- | --- |
| name | String | We currently support @subql/query |
| version | String | Version of the query service; available versions can be found here. It must also follow the SEMVER rules or latest |

# Datasource Spec

Defines the data that will be filtered and extracted and the location of the mapping function handler for the data transformation to be applied.

| Field | All manifest versions | Description |
| --- | --- | --- |
| kind | String | The type of data source, e.g. substrate/Runtime (see the examples above) |
| startBlock | Integer | This changes your indexing start block; set this higher to skip initial blocks with less data |
| mapping | Mapping Spec | See Mapping Spec below |

# Mapping Spec

| Field | All manifest versions | Description |
| --- | --- | --- |
| handlers & filters | Default handlers and filters, Custom handlers and filters | List all the mapping functions and their corresponding handler types, with additional mapping filters |

For custom runtime mapping handlers, please view Custom Data Sources.

# Data Sources and Mapping

In this section, we will talk about the default Substrate runtime and its mapping. Here is an example:
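
This snippet is taken from the Polkadot starter manifest shown earlier and illustrates the core of a default runtime data source:

```yaml
dataSources:
  - kind: substrate/Runtime # Indicates that this is the default runtime
    startBlock: 1           # Block to start indexing from
    mapping:
      file: ./dist/index.js # Entry path for this mapping
      handlers:
        - handler: handleEvent
          kind: substrate/EventHandler
          filter:
            module: balances
            method: Deposit
```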

# Mapping handlers and Filters

The following table explains filters supported by different handlers.

Your SubQuery project will be much more efficient when you only use event and call handlers with appropriate mapping filters.

| Network | Handler | Supported filter |
| --- | --- | --- |
| Substrate/Polkadot | substrate/BlockHandler | specVersion, modulo |
| Substrate/Polkadot | substrate/EventHandler | module, method |
| Substrate/Polkadot | substrate/CallHandler | module, method, success |
| Cosmos | cosmos/BlockHandler | modulo |
| Cosmos | cosmos/TransactionHandler | No filters |
| Cosmos | cosmos/MessageHandler | type, values |
| Cosmos | cosmos/EventHandler | type, messageFilter |
| Terra | terra/BlockHandler | No filters |
| Terra | terra/TransactionHandler | No filters |
| Terra | terra/MessageHandler | type, values |
| Terra | terra/EventHandler | type, messageFilter |
| Avalanche | avalanche/BlockHandler | No filters |
| Avalanche | avalanche/TransactionHandler | function filters (either the function fragment or signature), from (address), to (address) |
| Avalanche | avalanche/LogHandler | topics filters, and address |
| Algorand | algorand/BlockHandler | modulo |
| Algorand | algorand/TransactionHandler | txType, sender, receiver, applicationId, nonParticipant, assetId, newFreezeStatus, address |

Default runtime mapping filters are an extremely useful feature to decide what block, event, or extrinsic will trigger a mapping handler.

Only incoming data that satisfy the filter conditions will be processed by the mapping functions. Mapping filters are optional but are highly recommended as they significantly reduce the amount of data processed by your SubQuery project and will improve indexing performance.

```yaml
# Example filter from a Substrate CallHandler
filter:
  module: balances
  method: Deposit
  success: true
```

For Cosmos and Terra message and event handlers, you can filter by all the keys that exist in the message type provided by the filter. If the call is /terra.wasm.v1beta1.MsgExecuteContract or /cosmwasm.wasm.v1.MsgExecuteContract then you can also specify the name of the called function. An example of this is below:
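
The following sketch is assembled from the commented-out fields in the Juno starter manifest above (the contractCall value and contract address are illustrative):

```yaml
filter:
  type: execute
  messageFilter:
    type: "/cosmwasm.wasm.v1.MsgExecuteContract"
    contractCall: "provide_liquidity" # The name of the contract function that was called
    values: # A set of key/value pairs that are present in the message data
      contract: "juno1v99ehkuetkpf0yxdry8ce92yeqaeaa7lyxr2aagkesrw67wcsn8qxpxay0"
```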

The specVersion filter specifies the spec version range for a Substrate block. The following examples describe how to set version ranges.

```yaml
filter:
  specVersion: [23, 24]   # Index block with specVersion in between 23 and 24 (inclusive).
  specVersion: [100]      # Index block with specVersion greater than or equal 100.
  specVersion: [null, 23] # Index block with specVersion less than or equal 23.
```

The modulo filter allows handling every N blocks, which is useful if you want to group or calculate data at a set interval. The following example shows how to use this filter.

```yaml
filter:
  modulo: 50 # Index every 50 blocks: 0, 50, 100, 150....
```

# Custom Substrate and Cosmos Chains

You can index data from custom Substrate and Cosmos chains by also including chain types in the manifest.

# Custom Substrate Chains

We support the additional types used by Substrate runtime modules; typesAlias, typesBundle, typesChain, and typesSpec are also supported.

In the example below, network.chaintypes points to a file that includes all the custom types. This is a standard chainspec file that declares the specific types supported by this blockchain, in either .json, .yaml, or .js format.

```yaml
network:
  genesisHash: '0x91b171bb158e2d3848fa23a9f1c25182fb8e20313b2c1eb49219da7a70ce90c3'
  endpoint: 'ws://host.kittychain.io/public-ws'
  chaintypes:
    file: ./types.json # The relative filepath to where custom types are stored
```

To use TypeScript for your chain types file, include it in the src folder (e.g. ./src/types.ts), run yarn build, and then point to the generated JS file located in the dist folder.

```yaml
network:
  chaintypes:
    file: ./dist/types.js # Will be generated after yarn run build
```

Things to note when using a chain types file with a .ts or .js extension: the manifest must point to the compiled JavaScript file (e.g. under dist), not the TypeScript source, as described above.

Here is an example of a .ts chain types file:

```ts
import { typesBundleDeprecated } from "moonbeam-types-bundle";
export default { typesBundle: typesBundleDeprecated };
```

# Working Example

You can see the suggested method for connecting and retrieving custom chain types in SubQuery's official dictionary repository. There you will find a dictionary project for each network with all the chain types correctly inserted.

For example, Acala publishes an official chain type definition to NPM. This is imported into your project's package.json like so:

```json
{
  ...
  "devDependencies": {
    ...
    "@acala-network/type-definitions": "latest"
  },
  ...
}
```

Then under /src/chaintypes.ts we define a custom export for Acala's types bundle from this package.

```ts
import { typesBundleForPolkadot } from "@acala-network/type-definitions";

export default { typesBundle: typesBundleForPolkadot };
```

This is then exported in the package.json like so:

```json
{
  ...
  "devDependencies": {
    ...
    "@acala-network/type-definitions": "latest"
  },
  "exports": {
    "chaintypes": "src/chaintypes.ts"
  }
  ...
}
```

Finally, in the project.yaml manifest, we can import this official types bundle as per standard:

```yaml
network:
  genesisHash: "0xfc41b9bd8ef8fe53d58c7ea67c794c7ec9a73daf05e6d54b14ff6342c99ba64c"
  endpoint: wss://acala-polkadot.api.onfinality.io/public-ws
  chaintypes:
    file: ./dist/chaintypes.js
```

# Custom Cosmos Chains

Similar to how Substrate chain types are loaded into network.chaintypes, we can load protobuf messages specific to Cosmos chains. If you are mostly just using CosmWasm, the necessary messages should already be included.

You can reference a chaintypes file for Cosmos like so:

```yaml
    chainTypes: # This is a beta feature that allows support for any Cosmos chain by importing the correct protobuf messages
      cosmos.slashing.v1beta1:
        file: "./proto/cosmos/slashing/v1beta1/tx.proto"
        messages:
         - "MsgUnjail"
```

# Custom Data Sources

Custom Data Sources provide network-specific functionality that makes dealing with data easier. They act as middleware that can provide extra filtering and data transformation.

A good example of this is EVM support: having a custom data source processor for EVM means that you can filter at the EVM level (e.g. filter contract methods or logs), and data is transformed into structures familiar to the Ethereum ecosystem, as well as parsing parameters with ABIs.

Custom Data Sources can be used with normal data sources.

Here is a list of supported custom data sources:

| Kind | Supported Handlers | Filters | Description |
| --- | --- | --- | --- |
| substrate/FrontierEvm | substrate/FrontierEvmEvent, substrate/FrontierEvmCall | See filters under each handler | Provides easy interaction with EVM transactions and events on the Frontier EVM (widely used across Polkadot, including in the Moonbeam and Astar networks) |
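
As a rough sketch of what such a data source can look like (the processor bundle path, ABI asset, contract address, and handler name below are illustrative and should be checked against the Frontier EVM data source documentation):

```yaml
dataSources:
  - kind: substrate/FrontierEvm
    startBlock: 1
    processor:
      file: ./node_modules/@subql/frontier-evm-processor/dist/bundle.js # illustrative processor bundle path
      options:
        abi: erc20                                    # must match a key under assets
        address: '0x0000000000000000000000000000000000000000' # illustrative contract address
    assets:
      erc20:
        file: ./erc20.abi.json                        # illustrative ABI file
    mapping:
      file: ./dist/index.js
      handlers:
        - handler: handleEvmEvent                     # illustrative handler name
          kind: substrate/FrontierEvmEvent
          filter:
            topics:
              - Transfer(address indexed from, address indexed to, uint256 value)
```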

# Validating

You can validate your project manifest by running `subql validate`. This will check that it has the correct structure and valid values where possible, and provide useful feedback as to where any fixes should be made.
