Sounds Easy Enough
Climate tech has a deceptively simple pitch: monitor data, create carbon credits, sell them, make the world more sustainable.
If only reality were that simple.
Having spent some time in the domain, I can see the pain points that make it genuinely hard to implement. These problems range from the most mundane to borderline philosophical.
The simplified model goes something like this: a farmer adopts regenerative practices like reduced tillage, cover crops and better fertilizer management. These changes improve soil health and sequester carbon, but they also cost money upfront and might reduce yields in the short term. Carbon credits are the mechanism that's supposed to bridge that gap. A third party verifies that the farmer's practices led to measurable environmental improvements, and that verification gets turned into tradeable credits. Companies that need to offset their emissions buy those credits. So the farmer loses a bit of money short term, gains money long term, and the field actually becomes more sustainable.
Banks also enter the picture. If a farmer is enrolled in a verified green program, the bank that finances them gets to signal sustainability credentials. It's less about the loan itself and more about the bank being able to say they're funding eco-friendly agriculture. In a world where ESG reporting matters, that's worth something. In theory, everyone's incentives align: the farmer profits, the buyer offsets, the bank signals eco-friendliness, and the planet benefits.
Don't get me wrong: I do support sustainability efforts, and there are plenty of legitimate actors out there. That said, let's take a look at why it gets messy.
Using Big Words
Before anything can be verified or monetised, you have to deal with data rooted in physical reality. It's easy to underestimate how hard this problem is. We're used to working with massive amounts of data, but most of it is digital by nature: user interactions and financial transactions are created in software and live there. When you actually have to take measurements from the physical world, that's a fundamentally different challenge. Soil is extremely heterogeneous. What's true for one patch might not hold ten meters away. When you're talking about millions of hectares, you can't possibly sample every cubic meter, and even if you could, there are inherent error rates in sampling techniques. So you end up building models on limited samples and hoping they generalise well enough to hold up under scrutiny.
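To make the sampling problem concrete, here's a toy Monte Carlo simulation. All the numbers (patch count, the SOC mean and spread, the sample size) are invented for illustration; the point is only that a handful of physical samples from a heterogeneous field yields an estimate with real, quantifiable variance.

```python
import random
import statistics

random.seed(42)

# Toy model: a field's "true" soil organic carbon (SOC) varies patch by
# patch (heterogeneity), but we can only afford a few physical samples.
N_PATCHES = 10_000
true_field = [random.gauss(mu=2.0, sigma=0.6) for _ in range(N_PATCHES)]  # % SOC
true_mean = statistics.mean(true_field)

def estimate_from_samples(n_samples: int) -> float:
    """Estimate field-level SOC from a small random sample of patches."""
    return statistics.mean(random.sample(true_field, n_samples))

# Repeat the sampling campaign many times to see how much estimates spread.
estimates = [estimate_from_samples(10) for _ in range(1_000)]
spread = statistics.stdev(estimates)

print(f"true mean: {true_mean:.3f}, estimate spread (stdev): {spread:.3f}")
```

With ten samples the estimate's standard deviation is roughly sigma divided by the square root of ten, which is the kind of uncertainty any downstream model inherits before it even starts.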
This is where dMRV comes in: digital measurement, reporting and verification. The objective sounds clean. Take what a farmer says happened on a field and transform it into something independently verifiable and auditable. And do it at scale.
It starts with declarations. Farmers submit their practices and field-level data tied to a specific harvest year: tillage type, cover crops, fertilizer use and other agricultural decisions. The system validates ownership, adds guardrails against unrealistic data and fraud, checks eligibility and enforces internal consistency. But at this stage the data is still fundamentally self-reported. And while farmers are generally incentivised to participate, that doesn't mean they're always motivated to report accurately.
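A minimal sketch of what such guardrails might look like. The schema, field names and thresholds here are all hypothetical, not any real programme's rules; the point is the shape of the checks: plausibility ranges, eligibility windows and internal consistency.

```python
from dataclasses import dataclass

@dataclass
class Declaration:
    """Self-reported field data for one harvest year (illustrative schema)."""
    field_id: str
    harvest_year: int
    tillage: str                # e.g. "no-till", "reduced", "conventional"
    cover_crop: bool
    n_fertilizer_kg_ha: float   # declared nitrogen application

def validate(decl: Declaration) -> list[str]:
    """Return a list of guardrail violations; empty means the declaration passes."""
    issues = []
    # Plausibility guardrail: flag physically unrealistic fertilizer rates.
    if not 0 <= decl.n_fertilizer_kg_ha <= 500:
        issues.append("fertilizer rate outside plausible range")
    # Eligibility guardrail: only enrolled harvest years count.
    if decl.harvest_year < 2020:
        issues.append("harvest year predates programme start")
    # Internal consistency: a value the schema doesn't allow is a hard error.
    if decl.tillage not in {"no-till", "reduced", "conventional"}:
        issues.append(f"unknown tillage type: {decl.tillage!r}")
    return issues

ok = Declaration("field-1", 2023, "no-till", True, 120.0)
bad = Declaration("field-2", 2018, "plasma-till", True, 9000.0)
print(validate(ok))   # passes: []
print(validate(bad))  # trips all three guardrails
```

Checks like these catch typos and obvious fraud, but they can't tell you whether an internally consistent, plausible declaration is actually true. That's the job of the next stream.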
That's why remote sensing exists as a second stream. Models process satellite imagery across regions and seasons, trained on ground-truth data collected in the field and exposed to variability across countries and years. Their task is to infer whether certain practices likely took place on a given field, for example whether soil disturbance occurred or whether vegetation cover was present between harvest cycles. The output is a set of predictions that can be generated at scale. It's a good solution: high-resolution satellite imagery combined with AI-driven pattern matching scales extremely well. But it's obviously still an approximation.
Before anything can be monetised, the data also needs human oversight. You want to automate as much as possible, but you still need the option for spot checks and manual review. Anything you don't automate or make very streamlined will become a bottleneck in the future. On the other hand, we're talking about money and real users. You have to make sure the contracts you generate are actually based on reality.
The real complexity begins when these two streams meet. The system must reconcile what the farmer declares with what the models predict, while also respecting eligibility criteria, contractual constraints and external verification standards. This is not a simple comparison exercise. It becomes a lifecycle problem.
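The core of that reconciliation can be sketched as a three-way decision. The thresholds and the idea of routing ambiguous cases to humans are my assumptions, not a description of any production system, but they capture why this isn't a simple equality check: the model output is a probability, not a fact.

```python
from enum import Enum

class Verdict(Enum):
    CONFIRMED = "confirmed"        # declaration and model agree
    CONTRADICTED = "contradicted"  # model confidently disagrees
    NEEDS_REVIEW = "needs_review"  # model is unsure; route to a human

def reconcile(declared_no_till: bool,
              p_tillage_detected: float,
              confident: float = 0.8) -> Verdict:
    """Compare a farmer's no-till declaration with a remote-sensing
    probability that soil disturbance occurred on the field."""
    if p_tillage_detected >= confident:
        # Model strongly says tillage happened.
        return Verdict.CONTRADICTED if declared_no_till else Verdict.CONFIRMED
    if p_tillage_detected <= 1 - confident:
        # Model strongly says no tillage happened.
        return Verdict.CONFIRMED if declared_no_till else Verdict.CONTRADICTED
    # Anything in between is ambiguous: don't auto-decide on money matters.
    return Verdict.NEEDS_REVIEW

print(reconcile(declared_no_till=True, p_tillage_detected=0.05))  # confirmed
print(reconcile(declared_no_till=True, p_tillage_detected=0.92))  # contradicted
print(reconcile(declared_no_till=True, p_tillage_detected=0.50))  # needs review
```

Where exactly you set the confidence band is itself a product decision: tighten it and you drown reviewers in ambiguous cases, loosen it and you auto-approve claims the model was never sure about.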
Farm data has to be linked to corresponding organisations and companies. Fields have to be mapped to farms. Practice data has to be mapped to fields. Contracts are tied to specific years or intervals. Field boundaries evolve over time: a farmer might split a parcel, merge two fields, or redraw a boundary between seasons, and a contract that references last year's geometry now points to something that no longer exists in the same shape. Historical baselines shape eligibility. There might be user error on the admin side. A methodology update can change what counts as eligible, which means data that was correct when it was collected may no longer satisfy the rules it's being evaluated against. The pipeline has to handle that without silently rewriting history.
This leads to another subtle problem. There is a difference between what actually happened in the field and what the system believed at a given moment. A farmer might submit data late. A satellite model might improve and reinterpret older imagery. An admin might correct a boundary months later. The platform therefore has to reason not only about the historical state of the field, but also about the historical state of the system's knowledge. In database terms this is a bitemporal problem: you maintain two timelines, one tracking when something happened in reality and another tracking when the system recorded or learned about it. That separation is what lets you reconstruct past decisions honestly, without letting later corrections quietly overwrite what was known at the time.
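A stripped-down sketch of the bitemporal idea, under invented field names and dates. Each fact carries two timestamps: when it was true in the world and when the system learned about it. A query "as known on date X" ignores anything recorded later, so late corrections never rewrite what the system believed at decision time.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class BitemporalFact:
    """One versioned fact with two timelines:
    valid time  = when it was true in the real world,
    system time = when the system learned/recorded it."""
    field_id: str
    attribute: str
    value: str
    valid_from: date                       # real-world timeline
    recorded_at: date                      # system-knowledge timeline
    superseded_at: Optional[date] = None   # set when a later correction arrives

def as_known_on(facts: list[BitemporalFact], attribute: str,
                as_of: date) -> Optional[str]:
    """Reconstruct what the system believed on a past date, ignoring
    corrections that only arrived afterwards."""
    known = [f for f in facts
             if f.attribute == attribute
             and f.recorded_at <= as_of
             and (f.superseded_at is None or f.superseded_at > as_of)]
    # Of everything known at that moment, take the latest real-world state.
    return max(known, key=lambda f: f.valid_from).value if known else None

facts = [
    BitemporalFact("field-1", "tillage", "conventional",
                   valid_from=date(2022, 3, 1), recorded_at=date(2022, 4, 1),
                   superseded_at=date(2023, 1, 10)),
    # Late correction: the farmer actually switched to no-till in March 2022,
    # but the system only learned this in January 2023.
    BitemporalFact("field-1", "tillage", "no-till",
                   valid_from=date(2022, 3, 1), recorded_at=date(2023, 1, 10)),
]

print(as_known_on(facts, "tillage", date(2022, 6, 1)))  # conventional
print(as_known_on(facts, "tillage", date(2023, 2, 1)))  # no-till
```

Both answers are "correct": the first reproduces what any decision made in mid-2022 was based on, the second reflects current best knowledge. An auditable pipeline needs to be able to give either one on demand.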
Every decision must remain reproducible long after the season has ended.
The end result of the pipeline is not a UI widget or a dashboard metric. It is a verification-grade dataset that may be audited. That requires determinism, traceability and explainability. Years later, it should still be possible to answer a simple but uncomfortable question: why was this particular field considered eligible in that specific season? What was the state of knowledge for that field at that particular point in time?
From an engineering perspective, this forces a certain architecture. Domain models need versioning. Historical states must be preserved rather than overwritten. Background processing must be reliable and consistent across services. User declarations and verification logic must be clearly separated. Cross-service coordination has to be predictable. Audit trails are not optional features, they are foundational. But these are not well-established processes with battle-tested playbooks. You also have to keep things flexible and future-proof. That is the point where climate tech starts to resemble fintech more than typical SaaS products.
"Tell 'em that it's human nature"
All of this sits on top of a messy underlying reality. The EU and national governments create frameworks to incentivise greener behaviour, but they also have to keep economies competitive. Those two goals pull in opposite directions. If you set strict emission limits, you penalise your own producers. If you add protective tariffs to compensate, foreign producers just sell their goods in other markets instead. And if the cost burden gets too high, production itself moves somewhere else and the emissions move with it.
This is a heavily regulated space. It's not a free market emerging from first principles, not a domain where you can extrapolate from what rational self-interested agents would do. The resulting rules are often a compromise between environmental ambition and economic pragmatism, and some of them can feel counterintuitive and arbitrary.
Take additionality. The idea is simple: you only get rewarded for measurable improvements. On paper, that makes sense. You don't want to pay for something that would have happened anyway.
But once you move from principle to implementation, the edges start to show. If you were an early adopter who already transitioned toward regenerative practices before the incentives existed, you might receive less benefit than someone changing later. The metric rewards change within a defined window, not necessarily long-term consistency or early commitment.
And once money is tied to the size of the improvement, behaviour follows. If your payout depends on how much worse things would have been otherwise, there is a very real temptation to exaggerate that "otherwise." The bigger the claimed delta, the bigger the reward. This is not some theoretical edge case. Whenever financial gain is linked to self-declared baselines, some actors will push the limits, and some will cross them. The system then has to compensate for that. More verification, more rules, more controls. And each added safeguard increases complexity.
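The incentive problem fits in a few lines of arithmetic. This is a deliberately naive payout model with made-up numbers, nowhere near any real methodology's formula, but it shows why the baseline is the natural pressure point: the measured outcome is verifiable, the counterfactual is not.

```python
def payout(baseline_tco2: float, measured_tco2: float,
           price_per_credit: float = 30.0) -> float:
    """Pay for the claimed improvement over the counterfactual baseline.
    Illustrative only; real methodologies are far more involved."""
    delta = max(0.0, baseline_tco2 - measured_tco2)  # credits = avoided emissions
    return delta * price_per_credit

# Honest baseline: the field would have emitted 100 t CO2e without the programme.
print(payout(baseline_tco2=100.0, measured_tco2=80.0))  # 600.0

# Exaggerated baseline: identical real-world outcome, bigger claimed "otherwise".
print(payout(baseline_tco2=130.0, measured_tco2=80.0))  # 1500.0
```

Same field, same measured 80 tonnes, more than double the money, purely by inflating a number nobody can directly observe. Everything the verification apparatus does about baselines exists because of this one line of arithmetic.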
To make things worse, the rules themselves are not written by a single authority. Standards bodies like Verra and Gold Standard define the methodologies for how carbon credits are generated, measured and verified. They set the criteria for what counts as a valid credit, what evidence is required and how audits should work. In theory, this provides credibility and consistency across the market. In practice, these methodologies are complex, sometimes ambiguous, and they evolve. Different standards don't always agree with each other, so if you operate across multiple frameworks you're juggling parallel sets of rules that might contradict.
New scientific findings appear, political winds shift with every change of administration, regulatory bodies update their methodologies and auditors refine their expectations. A pipeline that was correct last year might suddenly need to produce slightly different evidence this year. That means systems must remain adaptable without rewriting everything from scratch. Data models need to tolerate new attributes, pipelines need to be rerunnable, and historical results must remain interpretable even after the rules that produced them have evolved. On top of that, which verifier is considered the most credible is itself a moving target. The market's sense of what counts as a credible credit is not static, and a platform that builds too tightly around one framework's assumptions may find itself on the wrong side of that shift.
Engineering Challenges
You need engineers to have some level of domain knowledge so they can operate with agency and make decisions. But you cannot turn every single employee into an expert in agricultural science, EU policy, climate methodology and contract law at the same time. That's just not realistic.
So you bring in lawyers, scientists, policy people. Which is necessary. But that also inevitably creates some level of siloing. At some point, engineers still have to make concrete decisions. And those decisions would not be trivial even with perfect knowledge. But perfect knowledge is not available. Information propagates through organisational layers, and with every level of indirection some of the details and direction will inevitably be lost in translation. Even if you have product-minded engineers who proactively try to fill in the gaps and affect direction, not all context will be visible.
The technical choices carry real weight here. Given the need for auditability and traceability, you might reach for patterns that work in similar domains, like Event Sourcing. But you really have to reason about every architectural decision from first principles, because nobody fully understands everything that will be needed in the future and nobody sees all the processes from one end to the other. You also need flexibility. You need to commit to rigour without locking yourself into assumptions that might not survive the next methodology change or product pivot. That's a hard balance to strike.
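For the Event Sourcing option specifically, a minimal sketch of why it appeals in this domain. The event kinds and state shape are invented; the mechanism is the real pattern: an append-only log of immutable events, with current state rebuilt by replay, so you can stop the replay at any past point and reproduce exactly what was decided back then.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass(frozen=True)
class Event:
    """An immutable fact appended to the log; the log itself is never edited."""
    seq: int
    kind: str
    data: dict[str, Any]

def replay(events: list[Event], up_to_seq: Optional[int] = None) -> dict:
    """Rebuild field state by folding events, optionally stopping at an
    earlier point in the log to reproduce a past decision."""
    state: dict = {"eligible": False}
    for ev in sorted(events, key=lambda e: e.seq):
        if up_to_seq is not None and ev.seq > up_to_seq:
            break
        if ev.kind == "practice_declared":
            state["tillage"] = ev.data["tillage"]
        elif ev.kind == "eligibility_granted":
            state["eligible"] = True
        elif ev.kind == "eligibility_revoked":
            state["eligible"] = False
    return state

log = [
    Event(1, "practice_declared", {"tillage": "no-till"}),
    Event(2, "eligibility_granted", {}),
    Event(3, "eligibility_revoked", {"reason": "methodology update"}),
]

print(replay(log, up_to_seq=2))  # eligible at the time credits were issued
print(replay(log))               # not eligible under current knowledge
```

The appeal is obvious: the audit trail is the source of truth rather than a side effect. The cost is equally real: event schemas are forever, and a methodology pivot can leave you reinterpreting years of events written under assumptions that no longer hold.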
And all of this has to survive strategic shifts. If the market or the methodology changes, you might be facing a major migration or a greenfield rewrite on a codebase that was designed around assumptions that no longer hold. That's a universal engineering problem, but here the ratio of external volatility to internal control is especially high. The number of independent forces that can break your assumptions is larger than in most domains.
To Wax Political
Like almost everything that touches money and regulation, this space gets politicised quickly.
Different groups try to shape the narrative. Corporate lobbying is one force, pushing for weaker standards or slower implementation when it suits the bottom line. On the other side, activist groups sometimes simplify extremely complex systems into moral binaries. Legitimate mechanisms get reduced to slogans. Nuance disappears. In some cases, the mindset becomes "the end justifies the means," and otherwise well-designed frameworks get pushed in directions driven more by ideology and emotion than by careful systems thinking. Neither side has a monopoly on distortion, and the actual science and engineering get caught in the middle.
Once these pressures enter the scene, climate tech is no longer just about modelling soil carbon or designing verification pipelines. It becomes entangled in political signalling, economic incentives and public narratives. And engineering decisions end up operating inside that tension, whether the engineers intended to participate in it or not.
For the record: I think this work matters. The mechanisms behind carbon markets are imperfect, but we need to start somewhere. Unfortunately, a small number of bad actors can exploit complexity for profit. A handful of fraudulent projects or inflated baselines can discredit an entire market in a single news cycle. That's disproportionate, but it's how trust and narratives work. Sigh… this is why we can’t have nice things.
As a civilisation we are more or less aligned on wanting to keep nature in a liveable state. But individual actors are incentivised to chase short-term wins. The science behind regenerative agriculture is real, the measurement tools are getting better every year, and the potential upside is enormous for everyone. Whether we can build systems and institutions that actually capture that upside without getting buried in politics and misaligned interests is a different question. Just like every other human endeavour, it comes down to whether the people building it care more about the problem than the incentive. Which is a strange thing to struggle with, given that this isn’t even a zero-sum game.
But the underlying problem is worth solving, the tools to solve it exist, and there are people building them right now. The reality will stay messy. Then again, pushing against a messy and indifferent reality might just be what a well-spent existence looks like anyway.