1986 - Fix duplicate pool stats #1987

Open · wants to merge 2 commits into master

Conversation

@Cmdv (Contributor) commented Jul 8, 2025

Description

This fixes #1986

Due to the complexity of partial entries and rollbacks, it was just as performant to add a unique constraint to the table.
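In persistent terms, the fix amounts to declaring a uniqueness constraint on the entity. This is only a sketch — the field names below are illustrative guesses, not the actual db-sync schema:

```haskell
-- Hypothetical persistent entity definition (field names are illustrative).
-- A "Unique..." line in the quasiquoted schema becomes a unique index,
-- so duplicate (pool, epoch) rows are rejected by the database.
PoolStat
  poolHashId PoolHashId
  epochNo    Word64
  UniquePoolStat poolHashId epochNo
```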

Checklist

  • Commit sequence broadly makes sense
  • Commits have useful messages
  • New tests are added if needed and existing tests are updated
  • Any changes are noted in the changelog
  • Code is formatted with fourmolu on version 0.10.1.0 (which can be run with scripts/fourmolize.sh)
  • Self-reviewed the diff

Migrations

  • The PR causes a breaking change of type a, b or c
  • If there is a breaking change, the PR includes a database migration and/or a fix process for old values, so that upgrade is possible
  • Resyncing and running the migrations provided will result in the same database semantically

If there is a breaking change, especially a big one, please add a justification here, elaborating on
what the migration achieves, what it cannot achieve, or why a migration is not possible.

@Cmdv Cmdv requested a review from a team as a code owner July 8, 2025 16:16
@kderme (Contributor) left a comment

This adds a unique key, but doesn't change the function that inserts the values. So if dbsync tries to insert a duplicate again, it will crash instead of doing a no-op.

Also, instances that upgrade to this version and already have duplicates would fail to create the unique key altogether.

Finally, any idea what could be the root cause of creating a duplicate in the first place?
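The first point above — a duplicate insert crashing rather than no-opping — can be modeled with a small pure sketch (not db-sync code; in persistent terms the no-op behaviour would correspond to something like insertUnique, which returns Nothing on a unique violation, rather than a plain insert, which throws):

```haskell
-- Minimal pure model of "insert vs. insert-or-skip" against a unique key.
-- Illustrative only; the real code would go through persistent/SQL.
import qualified Data.Set as Set

type Key = (Int, Int) -- assumed unique key: (pool_id, epoch_no)

-- Plain insert: fails on a duplicate, like a unique-constraint violation.
insertOrCrash :: Key -> Set.Set Key -> Set.Set Key
insertOrCrash k s
  | k `Set.member` s = error "unique constraint violated"
  | otherwise = Set.insert k s

-- Insert-or-skip: a duplicate becomes a no-op.
insertOrSkip :: Key -> Set.Set Key -> Set.Set Key
insertOrSkip = Set.insert -- Set.insert is already idempotent

main :: IO ()
main = do
  let s = insertOrSkip (1, 568) Set.empty
  print (Set.size (insertOrSkip (1, 568) s)) -- still 1: the duplicate is skipped
```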

@Cmdv (Contributor, Author) commented Jul 9, 2025

Ok, I think I've found it — the delete uses a strict comparison where it needs an inclusive one:

onlyDelete "PoolStat" [PoolStatEpochNo >. epochN] should be onlyDelete "PoolStat" [PoolStatEpochNo >=. epochN]

deleteUsingEpochNo :: MonadIO m => Word64 -> ReaderT SqlBackend m [(Text, Int64)]
deleteUsingEpochNo epochN = do
  countLogs <-
    concat
      <$> sequence
        [ onlyDelete "Epoch" [EpochNo ==. epochN]
        , onlyDelete "DrepDistr" [DrepDistrEpochNo >. epochN]
        , onlyDelete "RewardRest" [RewardRestSpendableEpoch >. epochN]
        , onlyDelete "PoolStat" [PoolStatEpochNo >. epochN]
        ]
  nullLogs <- do
    a <- setNullEnacted epochN
    b <- setNullRatified epochN
    c <- setNullDropped epochN
    e <- setNullExpired epochN
    pure [("GovActionProposal Nulled", a + b + c + e)]
  pure $ countLogs <> nullLogs

Will write tests to see if that's the case.
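The off-by-one can be illustrated with a self-contained sketch (simplified: epochs as plain Ints, not the actual db-sync code). Rolling back to epoch n and then replaying epoch n duplicates its rows unless the delete is inclusive:

```haskell
-- Model a rollback: delete rows whose epoch_no satisfies (op n),
-- then replay epoch n, which re-inserts its row once.
rollbackThenReplay :: (Int -> Int -> Bool) -> Int -> [Int] -> [Int]
rollbackThenReplay op n rows = filter (\e -> not (e `op` n)) rows ++ [n]

main :: IO ()
main = do
  let rows = [566, 567, 568]
  -- Buggy delete (>.): epoch 568 survives the delete, replay duplicates it.
  print (rollbackThenReplay (>) 568 rows) -- [566,567,568,568]
  -- Fixed delete (>=.): epoch 568 is removed first, replay re-adds it once.
  print (rollbackThenReplay (>=) 568 rows) -- [566,567,568]
```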

@Cmdv Cmdv force-pushed the 1986-duplicate-pool-stats branch from 0d7c288 to 1dfa4ac on July 9, 2025 13:50
@Cmdv (Contributor, Author) commented Jul 9, 2025

@kderme should we add a check on startup to see if there are duplicates and, if so, delete them? This would cover users who already have old corrupted data.
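The deduplication logic being proposed could look like this — a pure sketch only; the real cleanup would be a SQL delete keeping, say, the lowest id per key:

```haskell
-- Keep the first row seen for each key; drop later duplicates.
-- Purely illustrative of the intended startup cleanup.
import qualified Data.Set as Set

dedupeOn :: Ord k => (a -> k) -> [a] -> [a]
dedupeOn key = go Set.empty
  where
    go _ [] = []
    go seen (x : xs)
      | key x `Set.member` seen = go seen xs            -- duplicate: drop
      | otherwise = x : go (Set.insert (key x) seen) xs -- first: keep

main :: IO ()
main =
  -- rows are (row_id, pool_id, epoch_no); two rows share (pool 1, epoch 568)
  print (dedupeOn (\(_, p, e) -> (p, e)) [(1, 1, 568), (2, 1, 568), (3, 2, 568)])
  -- [(1,1,568),(3,2,568)]
```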

Successfully merging this pull request may close these issues.

Duplicate entries for pools for epoch 568 on mainnet in pool_stat table
2 participants