Crypto Training

Rounding in DeFi: When Dust Becomes an Oracle

Integer math is deterministic. Your rounding policy is not. This post connects fixed-point arithmetic, share accounting, and real exploit patterns where dust becomes profit.

Crypto Training · 2026-02-11 · 10 min read

Rounding bias diagram

Most DeFi math bugs are not “someone forgot SafeMath”.

They are:

  1. a rational rounding choice made in isolation
  2. repeated in a place where the attacker chooses the repetition count
  3. amplified with flash liquidity, MEV, or a loop that was assumed “too expensive”

People call this “rounding error”, but the better name is rounding policy.

Rounding is a policy decision that assigns ownership of dust.

The attacker model: repetition beats precision#

If you round in a user’s favor once, you might lose a cent.

If you round in a user’s favor in a loop the user controls, you built an oracle:

  • they can observe the dust
  • they can decide whether to continue
  • they can stop when the cumulative bias exceeds gas

This is why “not exploitable because it’s small” is rarely true on chain.
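A back-of-the-envelope model makes that calculus concrete. The numbers here (1 wei of bias per operation, a flat gas cost, attacker-chosen batch size) are invented for illustration:

```python
# Toy model of "repetition beats precision". All numbers are invented.
def batch_profit(bias_per_op: int, ops_per_call: int, gas_cost: int) -> int:
    """Net profit of one transaction that repeats a biased operation."""
    return bias_per_op * ops_per_call - gas_cost

# One biased op per transaction: 1 wei of dust never beats gas.
assert batch_profit(1, 1, 50_000) < 0

# The attacker controls the repetition count (loops, multicall, bundles):
assert batch_profit(1, 100_000, 50_000) > 0  # dust became profit
```

The attacker does not need the bias to grow; they only need to choose how many times it is applied per unit of gas.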

Fixed-point arithmetic: the three places rounding hides#

Almost every DeFi codebase has at least one of these:

  1. share accounting (shares <-> assets conversions)
  2. pricing (spot, TWAP, virtual prices, Q64.96-style fixed point)
  3. fee growth (accumulators updated by multiplication + division)

Rounding hides in:

  • mulDiv implementations
  • integer division (/) that floors by default
  • conversions between decimals (6 decimals USDC vs 18 decimals internal units)

The mistake that keeps reappearing: algebra is not the same as integer arithmetic#

Developers often refactor expressions for readability:

  • from a * b / c
  • to a * (b / c)

Those are equivalent over real numbers.

They are not equivalent over integers.

The integer version can lose precision earlier, changing who gets the dust.

This is why mature codebases centralize math into a small set of well-tested primitives instead of writing bespoke arithmetic in 20 places.
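The non-equivalence is mechanical and easy to check in any integer arithmetic. A quick Python illustration, with arbitrarily chosen values:

```python
a, b, c = 7, 10, 3

mul_first = a * b // c    # 70 // 3 = 23: divide once, at the end
div_first = a * (b // c)  # 10 // 3 = 3, then * 7 = 21: precision lost early

assert mul_first == 23
assert div_first == 21    # same algebra, different integers, different dust
```

The refactor did not just lose precision; it moved the floor to a place where a different party absorbs it.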

A mental model that actually works#

Ask: who chooses the input and how many times can they apply it?

Then choose rounding direction that is adversarially stable.

Here is a table I use when auditing:

| Operation | Typical actor | Safe default rounding | Why |
| --- | --- | --- | --- |
| mint shares on deposit | untrusted user | round down (floor) | prevents free share dust |
| compute shares burned on withdraw | untrusted user | round up (ceil) | user pays for precision loss |
| protocol fee accrual | protocol | round in protocol favor | prevents fee leakage |
| repay debt | borrower | round up (ceil) | prevents underpayment |
| liquidation seize | liquidator | round down (floor) | avoids over-seizing due to precision |

You can disagree with the policy, but you must be consistent.

Inconsistent rounding across two paths is how you get: “deposit via path A, withdraw via path B, profit”.
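Here is that A/B mismatch in miniature. The vault state and the policy (ceil on the deposit path, floor on the withdrawal path) are invented to make the asymmetry visible:

```python
def ceil_div(x: int, d: int) -> int:
    return -(-x // d)

# Invented vault state: each share is backed by ~10_000 asset units.
total_assets, total_shares = 1_000_000, 100

# Path A (buggy): mint with ceil. A 1-unit deposit mints a full share,
# even though the exact entitlement was 0.0001 shares.
deposit = 1
shares = ceil_div(deposit * total_shares, total_assets)
total_assets += deposit
total_shares += shares

# Path B: redeem with floor, the usual direction.
payout = shares * total_assets // total_shares

assert shares == 1
assert payout == 9901    # deposit via A, withdraw via B, profit
assert payout > deposit
```

The profit is not created by either path alone; it is created by the disagreement between them.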

The decimals trap: 6 decimals in, 18 decimals out, and a silent tax#

Decimals are an underrated source of rounding leakage because they create implicit conversions.

Here is a real-world example class:

  • user deposits USDC (6 decimals)
  • protocol accounts internally in 18 decimals
  • protocol later pays out using a different rounding path

If conversion is inconsistent, dust becomes systematic.

| Conversion | Common bug | Exploit shape |
| --- | --- | --- |
| amount18 = amount6 * 1e12 | performed late, after a division | attacker picks amounts that lose precision before scaling |
| amount6 = amount18 / 1e12 | rounds down by default | protocol silently underpays users (or, worse, under-collects debt) |
| mixed-decimals math | divides before multiplying in one path | deposit + withdraw mismatch |

Auditor trick: search for 1e12 and inspect every site where it appears.
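The leak is easy to demonstrate: the same USDC amount, run through the same hypothetical rate, scaled to 18 decimals before versus after the division. The rate is invented for illustration:

```python
SCALE = 10**12              # 6 -> 18 decimal scale factor
amount6 = 1_234_567         # 1.234567 USDC
rate_num, rate_den = 3, 7   # hypothetical exchange rate

scale_early = amount6 * SCALE * rate_num // rate_den    # divide at 18 decimals
scale_late = (amount6 * rate_num // rate_den) * SCALE   # divide at 6 decimals

assert scale_early > scale_late
assert scale_early - scale_late < SCALE  # up to ~1e12 wei lost per operation
```

Scaling late means the floor happens in 6-decimal units, so every lost remainder is worth up to 1e12 wei of internal units instead of 1.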

Concrete exploit pattern: share dilution by dust loops#

Consider a vault that mints shares:

SOLIDITY
shares = assets * totalShares / totalAssets;

If shares is rounded up, a user can mint slightly more shares than they paid for.

If there is any path where:

  • deposits can be repeated cheaply (or batched), and
  • withdrawals are not symmetric in rounding,

then “slightly more” becomes “extractable”.

The loop usually looks like this:

  1. deposit the minimum amount that still rounds favorably
  2. receive one extra unit of shares once in a while
  3. repeat until you have a measurable share edge
  4. exit in a path that rounds in a different direction

It does not need to be huge. It needs to be:

  • automatable
  • low risk
  • scalable (bots, bundles)

A toy numeric demonstration (why "just 1 wei" matters)#

Assume:

  • totalAssets = 999_999
  • totalShares = 1_000_000
  • you mint with ceil, redeem with floor

An attacker deposits a single asset unit, which produces:

  • exact shares = 1_000_000 / 999_999 ≈ 1.000001
  • minted shares with ceil = 2

They gained almost a full share unit of dust for a one-unit deposit.

If they can repeat this in a loop, and if there exists any path that lets them redeem shares without paying back the rounding edge, they have a printer.

The numbers here are toy. The lesson is the same:

If the caller can choose the “fractional” boundary, they can turn dust into a strategy.
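The loop is easy to simulate. This sketch uses an invented vault state and compares the unsafe policy (ceil on mint) against the safe one (floor on mint); the direction, not the magnitude, decides whether iterating pays:

```python
def ceil_div(x: int, d: int) -> int:
    return -(-x // d)

def round_trip(total_assets: int, total_shares: int, deposit: int,
               ceil_mint: bool) -> int:
    """Net profit of a deposit -> redeem round trip against a toy vault."""
    num = deposit * total_shares
    shares = ceil_div(num, total_assets) if ceil_mint else num // total_assets
    total_assets += deposit
    total_shares += shares
    payout = shares * total_assets // total_shares  # floor on the way out
    return payout - deposit

# Unsafe policy: ceil on mint. A dust deposit prints value, so looping pays.
assert round_trip(1_000_000, 100, 1, ceil_mint=True) > 0

# Safe policy: floor on mint. The loop strictly loses, so it self-defeats.
assert round_trip(1_000_000, 100, 1, ceil_mint=False) <= 0
```

Note that with the safe policy each iteration costs the attacker at least one unit: the rounding direction itself is the mitigation.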

The hardest part: rounding interacts with external calls#

Rounding becomes dangerous when you do it around an external call:

  • you compute expectedOut with rounding
  • you call a DEX / token / hook
  • you settle based on the computed value instead of the delta

In adversarial environments, deltas are the truth.

Return values are claims.

This is why “measure the balance delta” is a security pattern, not just a defensive coding trick.
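A toy fee-on-transfer token shows the gap between the claim and the delta. The token, its 1% fee, and the balances are all invented:

```python
class FeeOnTransferToken:
    """Invented token that delivers 1% less than the amount requested."""
    def __init__(self, balances: dict):
        self.balances = balances

    def transfer(self, src: str, dst: str, amount: int) -> bool:
        self.balances[src] -= amount
        self.balances[dst] += amount * 99 // 100  # 1% lost in transit
        return True                               # the "claim": it succeeded

token = FeeOnTransferToken({"user": 1_000, "vault": 0})

expected = 1_000                          # the computed value
before = token.balances["vault"]
token.transfer("user", "vault", expected)
delta = token.balances["vault"] - before  # the measured truth

assert delta == 990
assert delta < expected  # settling on `expected` credits value that never arrived
```

A vault that credits the user with `expected` instead of `delta` books 10 units that do not exist, and that gap is repeatable.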

mulDiv: use a reviewed 512-bit implementation#

The naive code I showed above ((x * y) / d) is not safe for production because x * y can overflow 256 bits even when the final result fits.

This is the second reason rounding bugs show up in incidents:

  1. teams avoid overflow by rearranging arithmetic
  2. rearrangement changes rounding behavior
  3. rounding behavior becomes exploitable

The correct move is to use a reviewed mulDiv that:

  • computes the 512-bit product
  • divides with a known rounding direction
  • handles edge cases explicitly

If you are writing a protocol, this is not the place to improvise.
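Python integers are unbounded, which makes it easy to show what a wrapping 256-bit product silently loses. The operands are chosen so that x * y overflows 2**256 while the true quotient is tiny (in Solidity 0.8 checked math the multiplication would revert instead; in an unchecked block or older compilers it wraps like this):

```python
MOD = 2**256

x = 2**200
y = 2**200
d = 2**300

exact = x * y // d            # true 512-bit-product result: 2**100
wrapped = (x * y) % MOD // d  # what wrapping 256-bit math would compute

assert exact == 2**100
assert wrapped == 0           # the overflow wrapped the product to zero
```

A reviewed mulDiv computes the full 512-bit product, so `exact` is what you get even when the intermediate does not fit in a word.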

A practical implementation: explicit rounding helpers#

Do not sprinkle / all over protocol code.

Make rounding explicit and name it.

SOLIDITY
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

library Rounding {
  // Floor division: Solidity's `/` already truncates.
  function divDown(uint256 x, uint256 d) internal pure returns (uint256) {
    return x / d;
  }

  // Ceiling division via (x - 1) / d + 1, valid because x > 0 here.
  function divUp(uint256 x, uint256 d) internal pure returns (uint256) {
    if (d == 0) revert();
    if (x == 0) return 0;
    return (x - 1) / d + 1;
  }

  // mulDivDown and mulDivUp are where most "dust oracles" are born.
  function mulDivDown(uint256 x, uint256 y, uint256 d) internal pure returns (uint256) {
    return (x * y) / d;
  }

  function mulDivUp(uint256 x, uint256 y, uint256 d) internal pure returns (uint256) {
    if (d == 0) revert();
    if (x == 0 || y == 0) return 0;
    return ((x * y) - 1) / d + 1;
  }
}

Yes, this can overflow if you do not use a 512-bit mulDiv. In production code you should use a well-reviewed implementation (OpenZeppelin, Solmate, PRBMath) rather than rolling your own.

The reason to show the naive version is to highlight the policy surface: the difference between down and up is not “minor”. It assigns ownership.
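Because the policy is pure arithmetic, you can check it exhaustively outside the EVM. This Python sketch mirrors the divUp identity above and confirms the gap between the two directions is exactly one unit:

```python
import math

def div_down(x: int, d: int) -> int:
    return x // d

def div_up(x: int, d: int) -> int:
    # Same identity as the Solidity helper: (x - 1) / d + 1 for x > 0.
    return 0 if x == 0 else (x - 1) // d + 1

for x in range(0, 500):
    for d in range(1, 40):
        assert div_up(x, d) == math.ceil(x / d)
        # The entire policy surface is this single unit of dust:
        expected_gap = 0 if x % d == 0 else 1
        assert div_up(x, d) - div_down(x, d) == expected_gap
```

One unit per operation sounds negligible; the rest of this post is about why it is not.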

A graph you should keep in your head: error accumulation#

Rounding error is usually bounded per operation, but the sign of the error is what matters.

If the error consistently benefits the attacker, the accumulation is linear in the number of operations.

If the attacker can cheaply loop the operation, the accumulation becomes profitable.

Here is the shape:

CODE
profit
  ^
  |          *
  |        *
  |      *
  |    *
  |  *
  +------------------> iterations

Your mitigation is not “increase precision”.

Your mitigation is to remove the attacker’s control over iteration count (caps), or to choose rounding direction that makes iteration self-defeating.

Testing: properties that catch rounding attacks#

Example-based tests rarely catch rounding exploits because the attacker’s win condition is “repeat until it works”.

Better properties:

  1. no free value: a round-trip deposit+withdraw should not increase assets
  2. monotonicity: adding assets should not decrease shares
  3. bounded error: rounding error per operation should be <= 1 unit (or a known bound)
  4. symmetry: conversions should be consistent across code paths

Here is a Foundry-style invariant sketch (conceptual):

SOLIDITY
contract VaultInvariants {
  Vault vault;
  IERC20 asset;

  function invariant_roundTripDoesNotProfit() public {
    uint256 a0 = asset.balanceOf(address(this));
    vault.deposit(1e6); // try dust-sized deposits
    vault.withdraw(vault.balanceOf(address(this)));
    uint256 a1 = asset.balanceOf(address(this));

    // allow tiny drift if protocol takes fees, but never positive drift
    assert(a1 <= a0);
  }
}

If this invariant fails only for certain dust ranges, you found a rounding oracle.
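State matters as much as deposit size: the same conversion can be dust-free in one vault state and leak on every call in another, which is why fuzzing over state (not just inputs) matters. A brute-force sketch with invented state:

```python
def ceil_div(x: int, d: int) -> int:
    return -(-x // d)

def mint_dust(deposit: int, total_assets: int, total_shares: int) -> int:
    """Extra shares a ceil-minting path hands out versus floor."""
    num = deposit * total_shares
    return ceil_div(num, total_assets) - num // total_assets

# At a perfect 1:1 ratio every deposit converts exactly: nothing to find.
assert all(mint_dust(a, 1_000_000, 1_000_000) == 0 for a in range(1, 1_000))

# Skew totalAssets by one wei and every small deposit rounds up a share.
assert all(mint_dust(a, 1_000_001, 1_000_000) == 1 for a in range(1, 1_000))
```

An example-based test written against the 1:1 state would pass forever and never see the second regime.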

Rounding and Uniswap-style fixed point (why v3/v4 math makes this worse)#

Uniswap v3 popularized dense fixed-point representations (Q64.96) and accumulator-based accounting.

That style is excellent for gas and precision, but it creates many sites where:

  • multiplication happens before division
  • the system relies on consistent rounding across many updates
  • attacker-controlled operations (swaps) happen extremely frequently

If you are implementing hook logic around swaps, you are sitting on an iteration loop controlled by adversaries.

That is exactly the environment where rounding becomes extractable.

Fee growth and accumulators: rounding errors that compound#

AMM designs often use accumulators:

  • feeGrowthGlobal
  • feeGrowthInside
  • per-position snapshots

These systems typically:

  1. track value in a high-precision unit
  2. multiply by liquidity
  3. divide back to “token units”

If rounding is inconsistent across:

  • global updates
  • per-position updates
  • claim paths

you can get one of two outcomes:

  • users systematically lose dust (a silent tax)
  • attackers can craft positions that capture dust repeatedly (extractable leakage)

This is also why “just use more precision” is not a complete fix.

The bug is often not “precision too low”.

The bug is “precision lost at different times in different paths”.
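A minimal illustration of that statement, with invented numbers: both paths see the same fee events, but one divides on every update while the other divides once at claim time:

```python
liquidity = 3
fee_events = [10] * 7   # seven fee events of 10 wei each, total 70

# Path A: divide on every update (precision lost early, on every event).
per_update = sum(fee // liquidity for fee in fee_events)  # 7 * 3 = 21

# Path B: accumulate raw fees, divide once at claim time.
at_claim = sum(fee_events) // liquidity                   # 70 // 3 = 23

assert per_update == 21
assert at_claim == 23   # same events, same precision, different dust owner
```

If the global accumulator uses one path and the claim path uses the other, the two-unit gap is either stranded in the contract or claimable by whoever notices first.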

Rounding + MEV: when you accidentally create a per-block lottery#

If a rounding edge is small but deterministic, MEV turns it into a lottery:

  • searchers simulate whether a block contains a profitable rounding edge
  • they include the action only when it is profitable

This is the same dynamic as sandwiching, just with math instead of slippage.

If your protocol has a claim/mint path that is:

  • callable by anyone
  • sensitive to rounding
  • profitable only in certain states

assume it will be harvested by bots.

The "rounding audit" pass: what I look for in a codebase#

When I do a rounding-focused pass, I do not start with math libraries.

I start with where rounding can be repeated:

  • deposits/mints
  • withdrawals/redeems
  • claiming rewards
  • swapping with fee rebates
  • liquidation paths

Then I look for:

  • inconsistent rounding between symmetric paths
  • implicit conversions between decimals
  • arithmetic refactors done “for readability”
  • any place where rounding interacts with an external call (settlement)

If you find one bad site, do variant analysis: the same mistake is usually copy-pasted.

A more realistic attack surface: rounding at boundaries#

In real protocols, rounding often bites at boundaries:

  1. vault boundary: ERC-4626 conversions (convertToShares, convertToAssets)
  2. oracle boundary: converting price feeds to internal units
  3. token boundary: 6-decimal assets inside 18-decimal math
  4. settlement boundary: external call happens between two computations

If you want a high-leverage audit approach:

  • find boundaries
  • audit rounding at boundaries

The boundary is where assumptions change. That is where attackers live.

Watch: rounding errors as an exploit primitive#

If you want a compact “attacker mindset” view of rounding bugs, this is one of the better short talks: it treats rounding as something adversaries repeat and compose, not as a one-off off-by-one.
