I wanted to raise a couple of thoughts regarding the Verra-side implementation of the tokenisation process, in particular addressing: "Is there a market need to provide for the reactivation of immobilized VCUs, as long as any related crypto instruments or tokens were not used for any other purpose and are destroyed as part of this reactivation?"; "what accounts constitute immobilization accounts and what transactions may be performed in them (e.g., retirements, reactivations if the environmental benefit of associated crypto instruments or tokens has not been used and such instruments or tokens are destroyed)"; and "What infrastructure and processes do entities participating in the immobilization approach need from Verra".
Verra will need to consider what data they wish to have access to regarding tokenised credits once a 2-way bridge is possible (a one-way bridge is a much simpler proposition, but Verra makes it clear in the consultation document that they are not really thinking along those lines, so I'm assuming 2-way is the default in what follows). While all transactions are immutable and transparent on the blockchain, the representation of the tokenisation process in Verra's database (presumably some sort of SQL Server/Oracle/DB2 normalised data model) is probably where most of the potential work lies in preventing the risk of "double-counting". While this isn't really Klima's concern, any issues Verra has with reporting on tokenised credits increase the risk of another halt or slowdown of further credit tokenisation, so it is in Klima's interest to help solve potential problems Verra might encounter at the intersection of the relational and blockchain worlds (not to mention the benefit of showing itself as an honest and eager partner). I think the issues Verra might have are primarily related to data structures and processes. Currently the carbon credit lifecycle probably looks something like this:
creation -> ([cancellation] or [vcm transfer] or [retirement]){0,1}
Where, after a credit is created and "live", all that can really be done with it is to cancel it (e.g. if an event occurs like the forest the credit relates to burning down), transfer it to another database (e.g. from Verra's repository to Gold Standard), or retire it (with an end-user realising the underlying benefit). This can only happen once, if at all (hence the {0,1} notation).
The issue Verra has with marking tokenised credits as retired has been stated clearly in the consultation document, but really the problem isn't with tokenisation per se, but with an unclear representation of the lifecycle of a credit (and an inability to change this dynamically, as would be useful with the creation of the on-chain market). To my mind, the work Klima and Toucan did had such an impact that issues which were already present became more visible to Verra (e.g. the graphics shared recently by, I think, 0xy, which show wallet addresses as two of the biggest buyers of VCM credits alongside Shell, Delta Airlines, etc.).
There are two problems here I think Verra would like resolved: knowing when a change in the lifecycle of a carbon credit occurs, and always knowing who the ultimate end-buyer is (i.e. who is realising the benefit of the credit).
The lifecycle for a carbon credit in the future will probably look something like the below:
creation -> [tokenisation]{0,*} -> ([cancellation] or [vcm transfer] or [retirement]){0,1}
Effectively this means that between creation and an end state there is now a new, optional process that can be initiated one or many times: tokenisation. That sub-process looks like:
tokenisation -> [detokenisation]
i.e. once tokenised, a credit can optionally be detokenised, and then that sub-process can be repeated.
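To make the notation above a little more concrete, below is a minimal Python sketch of the lifecycle as a state machine. The state names, and the assumption that a tokenised credit can be retired on-chain without being detokenised first, are my own reading of the examples later in this post rather than anything Verra has defined:

```python
from enum import Enum

class CreditState(Enum):
    LIVE = "live"                # issued and available in Verra's registry
    TOKENISED = "tokenised"      # immobilised off-chain, represented on-chain
    CANCELLED = "cancelled"      # terminal
    TRANSFERRED = "transferred"  # terminal (VCM transfer to another registry)
    RETIRED = "retired"          # terminal (benefit realised by an end-user)

# Tokenise/detokenise can repeat any number of times; a terminal state can be
# entered at most once, and nothing further is allowed after it.
ALLOWED_TRANSITIONS = {
    CreditState.LIVE: {CreditState.TOKENISED, CreditState.CANCELLED,
                       CreditState.TRANSFERRED, CreditState.RETIRED},
    CreditState.TOKENISED: {CreditState.LIVE,      # detokenisation
                            CreditState.RETIRED},  # assumed: on-chain retirement
}

def transition(current: CreditState, target: CreditState) -> CreditState:
    """Apply one lifecycle change, rejecting anything the model disallows
    (e.g. bridging an already-tokenised credit or reviving a retired one)."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target

# Example: tokenised, detokenised, tokenised again, then retired on-chain.
state = CreditState.LIVE
for target in (CreditState.TOKENISED, CreditState.LIVE,
               CreditState.TOKENISED, CreditState.RETIRED):
    state = transition(state, target)
```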
A diagram would show this a lot better (as some of the states can only be moved into under certain conditions), but from this come three key considerations:
1). A 2-way bridge allows the same credit to be bridged multiple times (e.g. tokenised on 7 August, detokenised on 19 September, tokenised again on 11 October, etc.)
2). The same credit could be bridged by different actors at different times (e.g. Toucan the first time, C3 the second, FlowCarbon the third, etc)
3). The same credit could be reported on by two on-chain actors if their data is collected and provided to Verra at two different points in time (e.g. one could state it as tokenised into their domain on 11 October and retired on, say, 15 October, while the other reported it as "live" in their last data dump on 9 October)
The concern isn't that a credit could exist in two different landscapes at the same time; basic logic on Verra's side (do not allow bridging of an already-tokenised credit), combined with the basic tenets of blockchain technology, prevents this. The concern is with the quality of reporting in the on-chain actors' data dumps (which Verra has stated it will require in some form, and which makes sense, as they are unlikely to want to understand the different architectures of each on-chain actor to bring it all together themselves, and hence will likely specify a standard format for the on-chain market to provide data in): different actors providing different snapshots of data at different times could lead to inconsistencies.
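To make that concern concrete, here is a toy illustration (the credit ID, dates, and snapshot-style format are entirely made up). If each actor reports only a credit's current state as of its own extract date, Verra ends up holding two apparently competing answers for the same credit and no record of the events in between:

```python
from datetime import date

# Hypothetical snapshot-style dumps: one row per credit, state "as of" the extract date.
actor_a_dump = {"extract_date": date(2022, 10, 9),
                "rows": [{"credit_id": "VCU-12345", "state": "live"}]}
actor_b_dump = {"extract_date": date(2022, 10, 16),
                "rows": [{"credit_id": "VCU-12345", "state": "retired"}]}

# Naively merging snapshots gives Verra two differing views of the same credit,
# with no way to tell a genuine double-count from a simple timing artefact.
merged = {}
for dump in (actor_a_dump, actor_b_dump):
    for row in dump["rows"]:
        merged.setdefault(row["credit_id"], []).append((dump["extract_date"], row["state"]))

print(merged)
# {'VCU-12345': [(datetime.date(2022, 10, 9), 'live'), (datetime.date(2022, 10, 16), 'retired')]}
```

Event-level rows (one row per lifecycle change, as suggested below) remove this ambiguity, because the full history can be replayed in order regardless of when each dump arrives.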
Again, this isn't Klima's problem to solve, but raising it, and suggesting the solution, might win buy-in from Verra and increase the likelihood of a strong partnership (even though we know there is no chance of double-counting, providing a way for Verra to verify this without needing to understand the details of blockchains can only improve their confidence to move more quickly too). There are multiple answers to this, but the one I prefer is:
1). Each on-chain actor is responsible for providing data dumps to Verra using their specified format (likely included as part of the partnership agreement anyway).
2). The data dumps include not one row per credit, but one row for each change in lifecycle state for the credit since the last extract (e.g. if a credit is tokenised, detokenised, tokenised again, and then retired, the extract would include 4 rows for that credit) - see the sketch after this list.
3). Verra loads these data sets into a common data store using an ETL process they develop. The data shared could be as simple as the Verra credit id, the tokenisation id, the on-chain organisation (Toucan, C3, etc.), and the state the credit entered into, but would likely also include some KYC requirements (like the name of the company utilising the offsetting benefit).
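As an illustration of point 2 (the sketch referred to above), here is what an event-level extract for the earlier example could look like; the field names, timestamps, and tokenisation IDs are placeholders of my own, not a format Verra has specified:

```python
import csv, io

# Illustrative event-level extract: one row per lifecycle change since the last dump,
# rather than one row per credit. Field names are made up for the example.
FIELDS = ["verra_credit_id", "tokenisation_id", "bridge_org",
          "lifecycle_state", "event_timestamp", "beneficiary"]

events = [
    {"verra_credit_id": "VCU-12345", "tokenisation_id": "tok-001", "bridge_org": "Toucan",
     "lifecycle_state": "tokenised",   "event_timestamp": "2022-08-07T10:02:11Z", "beneficiary": ""},
    {"verra_credit_id": "VCU-12345", "tokenisation_id": "tok-001", "bridge_org": "Toucan",
     "lifecycle_state": "detokenised", "event_timestamp": "2022-09-19T14:40:03Z", "beneficiary": ""},
    {"verra_credit_id": "VCU-12345", "tokenisation_id": "tok-002", "bridge_org": "C3",
     "lifecycle_state": "tokenised",   "event_timestamp": "2022-10-11T09:15:27Z", "beneficiary": ""},
    {"verra_credit_id": "VCU-12345", "tokenisation_id": "tok-002", "bridge_org": "C3",
     "lifecycle_state": "retired",     "event_timestamp": "2022-10-15T16:55:48Z",
     "beneficiary": "Example Corp Ltd"},  # KYC-style field for the end beneficiary
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(events)
print(buffer.getvalue())
```

The point is simply that each row records a transition rather than a final state, so the full history of a credit can be reconstructed whichever actor reports it and whenever each dump arrives.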
On Verra's side this requires a fairly simple design change: the table in one of their data stores representing Credits gains a relationship to a new Credit Event table, which stores one row for each change in lifecycle (populated either from Verra's own systems or from the data dumps provided by on-chain market participants), linked to the Credit Id in the Credit table, plus a new reference table holding the valid set of Credit Lifecycle States. This brings all off-chain and on-chain data together for Verra to review in a relational format they are comfortable with, and makes it easy to piece together the full lifecycle of a credit from creation through to retirement/transfer/cancellation, regardless of which organisations and systems it has travelled through.
It also provides flexibility for the future. As new on-chain carbon participants emerge they plug into the same process (requiring no change on Verra's side and a small amount of development on the on-chain side), and any new developments in how credits are viewed could be added easily by Verra as it sees fit via a new row in the Credit Lifecycle State table, a change to the existing data load interface, and a definition of how that state is entered into (e.g. maybe in the future Verra requires an "on-chain transfer" state to signify the movement of a credit from one wallet to another, say into a liquidity pool, or separate states for retirements by corporates and retirements by individuals).
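Below is a minimal sketch of that schema, using SQLite purely for illustration; the table and column names are my own guesses at a workable design rather than anything in Verra's actual data model:

```python
import sqlite3

# Illustrative only: the existing Credit table gains a child Credit_Event table,
# plus a reference table holding the valid set of lifecycle states.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Credit (
    credit_id       TEXT PRIMARY KEY,
    project_id      TEXT,
    vintage_year    INTEGER
);

CREATE TABLE Credit_Lifecycle_State (
    state_code      TEXT PRIMARY KEY,      -- e.g. 'tokenised', 'detokenised', 'retired'
    description     TEXT
);

CREATE TABLE Credit_Event (
    credit_event_id INTEGER PRIMARY KEY,
    credit_id       TEXT NOT NULL REFERENCES Credit(credit_id),
    state_code      TEXT NOT NULL REFERENCES Credit_Lifecycle_State(state_code),
    event_timestamp TEXT NOT NULL,
    source_org      TEXT,                  -- 'Verra', 'Toucan', 'C3', ...
    tokenisation_id TEXT,                  -- on-chain reference, where applicable
    beneficiary     TEXT                   -- KYC-style end-user of the offset
);
""")

# Adding a new lifecycle state later (e.g. an on-chain wallet-to-wallet transfer)
# is just a new reference row; no structural change is required.
conn.execute(
    "INSERT INTO Credit_Lifecycle_State VALUES ('on_chain_transfer', "
    "'Credit moved between wallets, e.g. into a liquidity pool')"
)
```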
Hopefully this description makes sense - I appreciate it is difficult to describe abstract concepts in text without some pictures. I'd be happy to knock up a couple of diagrams (say a process flow and a data model) and a couple of examples to highlight these concepts better if people are interested?