Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
No actionable comments were generated in the recent review. 🎉
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough

Adds a new SparkDEX v4 adapter that fetches pools from V4_POOLS_API (5s timeout), filters pools with token0/token1 and TVL > 0, normalizes APR sources (FEE full, RFLR 50%, other rewards full), converts APR→APY, picks the best source per pool, and returns pool metadata with apyBase and optional apyReward.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Client as Client
  participant Adapter as Adapter\n(src/adaptors/sparkdex-v4)
  participant V4API as V4_POOLS_API
  Client->>Adapter: call apy()
  Adapter->>V4API: fetch pools (5s timeout)
  V4API-->>Adapter: return chains, pools, vaults
  Adapter->>Adapter: validate/filter pools (token0/token1, TVL>0)
  Adapter->>Adapter: normalize APR sources (FEE 100%, RFLR 50%, others 100%)
  Adapter->>Adapter: convert APR→APY and compute effective APYs
  Adapter->>Adapter: compare pool vs vault APYs, choose best source
  Adapter->>Adapter: build pool metadata (apyBase, apyReward?, rewardTokens?)
  Adapter-->>Client: return array of pool metadata
```
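The normalization and best-source steps in the diagram can be sketched in plain JavaScript. The `AprTypeId` values, the 50% rFLR haircut, and the sample pool data below are assumptions for illustration, not the adapter's actual code:

```javascript
// Minimal sketch of the pipeline above; AprTypeId values, the rFLR haircut,
// and the sample pool are illustrative assumptions.
const AprTypeId = { FEE: 0, RFLR: 1 };

// APR (percent) -> APY (percent), daily compounding
const aprToApy = (apr) => ((1 + apr / 100 / 365) ** 365 - 1) * 100;

// Normalize one APR source (a pool or a vault) into base/reward APY.
function effectiveApy(source) {
  const aprs = source.aprs || [];
  const sum = (pred) => aprs.filter(pred).reduce((s, a) => s + (a.apr || 0), 0);
  const apyBase = aprToApy(sum((a) => a.type === AprTypeId.FEE));
  // rFLR rewards are haircut to 50% (early-exit penalty)
  const rflrApy = aprToApy(sum((a) => a.type === AprTypeId.RFLR)) * 0.5;
  const otherApy = aprToApy(
    sum((a) => a.type !== AprTypeId.FEE && a.type !== AprTypeId.RFLR)
  );
  return { apyBase, apyReward: rflrApy + otherApy };
}

// Pick the pool itself or one of its vaults, whichever has the highest
// post-haircut total APY.
function bestSource(pool) {
  const total = (s) => {
    const e = effectiveApy(s);
    return e.apyBase + e.apyReward;
  };
  return [pool, ...(pool.vaults || [])].reduce((best, c) =>
    total(c) > total(best) ? c : best
  );
}

const pool = {
  tvlUsd: 1000,
  aprs: [{ type: AprTypeId.FEE, apr: 10 }],
  vaults: [
    { aprs: [{ type: AprTypeId.FEE, apr: 5 }, { type: AprTypeId.RFLR, apr: 20 }] },
  ],
};
console.log(bestSource(pool) === pool.vaults[0]); // true: the vault wins
```

Note that the comparison uses the same post-haircut APYs that would be emitted, which is the consistency issue the review below raises.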
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 passed
The sparkdex-v4 adapter exports pools. Test Suites: 1 passed, 1 total
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/adaptors/sparkdex-v4/index.js`:
- Around line 31-42: getBestAprSource currently compares raw pool.apr and
vault.apr but the emitted yields apply a haircut (e.g., halving RFLR) and a
reward-type mapping, so change the selection to compare the same normalized APR
values you export: compute an effectiveApr for the pool and for each vault by
calling the same APR-normalization function/logic used when emitting yields
(apply the RFLR half, and the reward-type-to-percentage mapping instead of
treating non-FEE/non-RFLR as 100%), then use those effectiveApr values in the
Math.max(...) and in the reduce(...) comparator inside getBestAprSource so the
chosen source matches post-haircut/exported APR; keep the existing type checks
(typeof v.apr === 'number' etc.) when deriving inputs.
- Around line 45-46: The apy function currently calls axios.get(V4_POOLS_API)
without a timeout which can hang; update the request to include an explicit
timeout (e.g., 5000 ms) via axios.get(url, { timeout: <ms> }) or use a shared
axios instance with a default timeout, so the call inside apy rejects on
slow/unresponsive upstreams; modify the apy function's axios.get invocation
accordingly.
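The timeout fix described in that comment can be sketched as follows. Here `httpGet` is a stand-in for `axios.get` (injected so the sketch runs without a network); the URL is the one this adapter uses:

```javascript
// Sketch of the timeout fix: pass an explicit timeout to the HTTP client so a
// slow or unresponsive upstream rejects instead of hanging indefinitely.
// `httpGet` is a stand-in for axios.get, e.g. fetchPools(axios.get).
const V4_POOLS_API =
  'https://api.sparkdex.ai/dex/v4/pools?chainId=14&dex=SparkDEX';
const REQUEST_TIMEOUT_MS = 5000;

const fetchPools = (httpGet) =>
  httpGet(V4_POOLS_API, { timeout: REQUEST_TIMEOUT_MS }).then((r) => r.data);
```

With the real axios, `timeout` aborts the request after the given milliseconds and rejects the promise, so `apy()` fails fast rather than stalling the whole run.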
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6ab5c4cf-ebbe-45ad-ab70-338438871297
📒 Files selected for processing (1)
src/adaptors/sparkdex-v4/index.js
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/adaptors/sparkdex-v4/index.js (1)
Lines 42-47: ⚠️ Potential issue | 🟠 Major: Restrict reward APR aggregation to the documented 2–7 types.
Lines 42-47 and 104-109 still bucket every non-`FEE`/non-`RFLR` entry into `otherRewardApr`. If SparkDEX adds a new APR type, this adapter will start counting it as a 100% reward in both source selection and emitted `apyReward`, which breaks the mapping described in this PR.

Suggested fix
```diff
+const REWARD_APR_TYPES = new Set([
+  AprTypeId.DINERO,
+  AprTypeId.PICO,
+  AprTypeId.BUGO,
+  AprTypeId.SPRK,
+  AprTypeId.DELEGATION,
+  AprTypeId.CUSDX,
+]);
+
+const splitAprs = (aprs = []) =>
+  aprs.reduce(
+    (acc, a) => {
+      const apr = typeof a.apr === 'number' ? a.apr : 0;
+      if (a.type === AprTypeId.FEE) acc.fee += apr;
+      else if (a.type === AprTypeId.RFLR) acc.rflr += apr;
+      else if (REWARD_APR_TYPES.has(a.type)) acc.other += apr;
+      return acc;
+    },
+    { fee: 0, rflr: 0, other: 0 }
+  );
+
 function getEffectiveTotalApy(source) {
-  const aprs = source.aprs || [];
-  const feeApr = aprs
-    .filter((a) => a.type === AprTypeId.FEE)
-    .reduce((s, a) => s + (a.apr || 0), 0);
-  const rflrApr = aprs
-    .filter((a) => a.type === AprTypeId.RFLR)
-    .reduce((s, a) => s + (a.apr || 0), 0);
-  const otherRewardApr = aprs
-    .filter((a) => a.type !== AprTypeId.FEE && a.type !== AprTypeId.RFLR)
-    .reduce((s, a) => s + (a.apr || 0), 0);
+  const { fee: feeApr, rflr: rflrApr, other: otherRewardApr } = splitAprs(
+    source.aprs
+  );
   const apyBase = calculateApy(feeApr);
   const rflrApy = calculateApy(rflrApr);
   const otherRewardApy = calculateApy(otherRewardApr);
@@
-  const feeApr = aprs
-    .filter((a) => a.type === AprTypeId.FEE)
-    .reduce((s, a) => s + (a.apr || 0), 0);
-  const rflrApr = aprs
-    .filter((a) => a.type === AprTypeId.RFLR)
-    .reduce((s, a) => s + (a.apr || 0), 0);
-  const otherRewardApr = aprs
-    .filter((a) => a.type !== AprTypeId.FEE && a.type !== AprTypeId.RFLR)
-    .reduce((s, a) => s + (a.apr || 0), 0);
+  const { fee: feeApr, rflr: rflrApr, other: otherRewardApr } =
+    splitAprs(aprs);
```

Also applies to: 98-109
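The whitelisted aggregation this comment proposes can be exercised standalone. The numeric `AprTypeId` values below mirror the 0–7 mapping described in this PR; the sample input is hypothetical:

```javascript
// Standalone version of the whitelisted aggregation from the suggested fix.
// AprTypeId numeric values follow the 0-7 mapping described in this PR.
const AprTypeId = {
  FEE: 0, RFLR: 1, DINERO: 2, PICO: 3, BUGO: 4, SPRK: 5, DELEGATION: 6, CUSDX: 7,
};
const REWARD_APR_TYPES = new Set([
  AprTypeId.DINERO, AprTypeId.PICO, AprTypeId.BUGO,
  AprTypeId.SPRK, AprTypeId.DELEGATION, AprTypeId.CUSDX,
]);

const splitAprs = (aprs = []) =>
  aprs.reduce(
    (acc, a) => {
      const apr = typeof a.apr === 'number' ? a.apr : 0;
      if (a.type === AprTypeId.FEE) acc.fee += apr;
      else if (a.type === AprTypeId.RFLR) acc.rflr += apr;
      else if (REWARD_APR_TYPES.has(a.type)) acc.other += apr;
      // any unknown future type falls through and is ignored
      return acc;
    },
    { fee: 0, rflr: 0, other: 0 }
  );

const out = splitAprs([
  { type: AprTypeId.FEE, apr: 3 },
  { type: AprTypeId.RFLR, apr: 4 },
  { type: AprTypeId.SPRK, apr: 2 },
  { type: 99, apr: 100 }, // unknown type: ignored, not counted as reward
]);
console.log(out); // { fee: 3, rflr: 4, other: 2 }
```

The key behavioral change: an unrecognized type contributes nothing instead of silently inflating `apyReward`.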
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sparkdex-v4/index.js` around lines 42 - 47, The current otherRewardApr aggregation (variable otherRewardApr in src/adaptors/sparkdex-v4/index.js) filters only by excluding AprTypeId.FEE and AprTypeId.RFLR, which will incorrectly include any future APR types; change the filter to explicitly include only the documented reward types (the specific AprTypeId enum values representing the 2–7 reward types used by this adapter) before summing apr.apr, and apply the same explicit-type whitelist change to the other identical aggregation block around the 98-109 range so only the documented reward types contribute to otherRewardApr and emitted apyReward.
🧹 Nitpick comments (1)
src/adaptors/sparkdex-v4/index.js (1)
Line 1: Prefer the shared `aprToApy` helper over a local copy.
`src/adaptors/utils.js` already exports this conversion. Reusing it here keeps compounding logic aligned with the rest of the adapters and avoids formula drift.

Suggested refactor
```diff
 const axios = require('axios');
+const { aprToApy } = require('../utils');
@@
-const calculateApy = (_apr) => {
-  const APR = _apr / 100;
-  const n = 365;
-  const APY = (1 + APR / n) ** n - 1;
-  return APY * 100;
-};
+const calculateApy = aprToApy;
```

Also applies to: 22-27
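For reference, the conversion both helpers perform is daily compounding of a percentage APR. A minimal standalone version (the shared helper's exact signature in `utils.js` may differ):

```javascript
// Daily-compounded APR -> APY, percentages in and out. This mirrors the local
// calculateApy being removed; the shared utils helper computes the same thing.
const aprToApy = (apr, compoundsPerYear = 365) =>
  ((1 + apr / 100 / compoundsPerYear) ** compoundsPerYear - 1) * 100;

console.log(aprToApy(10).toFixed(4)); // "10.5156"
```

A 10% APR compounds to roughly a 10.52% APY, which is why keeping one shared formula matters: a drift of even the compounding frequency changes every emitted yield.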
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sparkdex-v4/index.js` at line 1, Replace the local APR→APY conversion with the shared helper: remove the local aprToApy implementation in src/adaptors/sparkdex-v4/index.js (the function around lines 22–27) and instead import the aprToApy exported from src/adaptors/utils.js, then call that imported aprToApy wherever the local function was used; ensure the require/import name matches aprToApy and update any references to the original local function accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/adaptors/sparkdex-v4/index.js`:
- Around line 64-67: The current vaultEffectives mapping incorrectly zeroes
vaults based on the legacy top-level v.apr; remove that gate and instead base
the check on the modern breakdown (v.aprs) or simply always call
getEffectiveTotalApy so valid breakdowns aren't ignored. Update the vaults.map
callback (vaultEffectives) to stop checking typeof v.apr and either check for a
present/usable v.aprs (or use getBestAprSource/v.getBestAprSource if available)
before returning 0, otherwise call getEffectiveTotalApy(v) to compute the
effective APY.
---
Duplicate comments:
In `@src/adaptors/sparkdex-v4/index.js`:
- Around line 42-47: The current otherRewardApr aggregation (variable
otherRewardApr in src/adaptors/sparkdex-v4/index.js) filters only by excluding
AprTypeId.FEE and AprTypeId.RFLR, which will incorrectly include any future APR
types; change the filter to explicitly include only the documented reward types
(the specific AprTypeId enum values representing the 2–7 reward types used by
this adapter) before summing apr.apr, and apply the same explicit-type whitelist
change to the other identical aggregation block around the 98-109 range so only
the documented reward types contribute to otherRewardApr and emitted apyReward.
---
Nitpick comments:
In `@src/adaptors/sparkdex-v4/index.js`:
- Line 1: Replace the local APR→APY conversion with the shared helper: remove
the local aprToApy implementation in src/adaptors/sparkdex-v4/index.js (the
function around lines 22–27) and instead import the aprToApy exported from
src/adaptors/utils.js, then call that imported aprToApy wherever the local
function was used; ensure the require/import name matches aprToApy and update
any references to the original local function accordingly.
hey @0xNx, thanks for the PR. A couple of minor issues:
- Can you get TVL / APY from onchain?
- Use native APY instead of the getBestAprSource pls as this can be misleading to users
- Add informative comment regarding 50% penalty to rFLR
1. add `utils.keepFinite`
2. add other reward types
3. show only native APY
4. add informative comment regarding rFLR APY ratio
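For the first item: the repo's `utils.keepFinite` is the real helper, so the stand-in below only mirrors the idea under assumptions (the exact field list it checks is not shown here):

```javascript
// Hypothetical stand-in for utils.keepFinite: drop pools whose numeric fields
// came out NaN/Infinity (e.g. from a division by a zero TVL) before returning.
const keepFinite = (p) =>
  Number.isFinite(p.tvlUsd) &&
  ['apyBase', 'apyReward']
    .filter((k) => p[k] !== undefined)
    .every((k) => Number.isFinite(p[k]));

const pools = [
  { pool: 'a', tvlUsd: 100, apyBase: 5 },
  { pool: 'b', tvlUsd: NaN, apyBase: 5 },
  { pool: 'c', tvlUsd: 100, apyBase: Infinity },
];
console.log(pools.filter(keepFinite).map((p) => p.pool)); // [ 'a' ]
```

In the adapter this would typically be the last step: `return pools.map(toEntry).filter(keepFinite)`.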
Resolved review comments in commit afe4fb2: 1, add utils.keepFinite. For this comment: "Can you get TVL / APY from onchain?"
🧹 Nitpick comments (1)
src/adaptors/sparkdex-v4/index.js (1)
Lines 30-33: Consider consolidating the duplicated rFLR rationale comment. The same explanation appears twice; keeping a single source-of-truth comment near `RFLR_APY_RATIO` reduces drift risk.

Also applies to: 79-82
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adaptors/sparkdex-v4/index.js` around lines 30 - 33, Consolidate the duplicated rFLR rationale by keeping the explanatory comment only next to the constant RFLR_APY_RATIO and removing the repeated copy elsewhere; ensure the retained comment explains the 1*100% + 11*50% over 12 months calculation and the resulting 13/24 ratio so future readers see the single source of truth tied to RFLR_APY_RATIO, and delete the other instance (the comment duplicate near the later block) to avoid drift.
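The arithmetic that rationale comment describes can be spelled out directly:

```javascript
// rFLR vesting rationale from the review comment above: 1 month vests at 100%
// and the remaining 11 months are claimable early at 50%, averaged over 12
// months, giving the 13/24 ratio applied to the rFLR APR.
const RFLR_APY_RATIO = (1 * 1.0 + 11 * 0.5) / 12;
console.log(RFLR_APY_RATIO === 13 / 24); // true (≈ 0.5417)
```

Keeping this derivation in one comment next to the constant, as the review suggests, means the formula and its explanation cannot silently diverge.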
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/adaptors/sparkdex-v4/index.js`:
- Around line 30-33: Consolidate the duplicated rFLR rationale by keeping the
explanatory comment only next to the constant RFLR_APY_RATIO and removing the
repeated copy elsewhere; ensure the retained comment explains the 1*100% +
11*50% over 12 months calculation and the resulting 13/24 ratio so future
readers see the single source of truth tied to RFLR_APY_RATIO, and delete the
other instance (the comment duplicate near the later block) to avoid drift.
Ok, so the rewards APY needs to use the API, but how about TVL and native APY? Can we get those from onchain?
Yes, both TVL and native APY (fee APY) can be obtained on-chain. TVL is straightforward, while the fee calculation is less efficient, as it requires querying swap event logs from the last 24 hours. Please let me know what you think.
Add SparkDEX V4 adaptor (Flare)
Summary
Adds an adaptor for SparkDEX V4 on Flare (chainId 14). SparkDEX V4 is powered by Algebra Integral v1.2.2 and exposes pool and vault (Steer, Ichi, etc.) APRs via a public API.
Protocol / chain
`sparkdex-v4`

Data source
https://api.sparkdex.ai/dex/v4/pools?chainId=14&dex=SparkDEX

Implementation notes
Pool vs vault APR
The API returns pools with optional `vaults` (e.g. Steer, Ichi). For each pool we take the highest APR among the native pool and all its vaults, and use that source's `aprs` breakdown for base and reward.

APR types
The API provides `aprs[]` with a `type` field. We map them as:
- `0` (FEE) → base APY
- `1` (RFLR) → reward APY, scaled by 50% (early exit penalty, aligned with sparkdex-v3.1)
- `2`–`7` (DINERO, PICO, BUGO, SPRK, DELEGATION, CUSDX) → reward APY at 100%

Output
Each pool is emitted with `pool`, `symbol`, `project: 'sparkdex-v4'`, `chain: 'flare'`, `tvlUsd`, `apyBase`, `underlyingTokens`, and when applicable `apyReward` and `rewardTokens: [rFLR]`.

Testing
From the project root:

```
npm_config_adapter=sparkdex-v4 npm run test --prefix src/adaptors
```

All tests (allowed fields, unique pool ids, protocol slug, APY types, etc.) pass.
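The Output section above can be made concrete with a hypothetical emitted entry; every address and number here is a placeholder, not real data:

```javascript
// One emitted pool entry matching the Output section of this PR description;
// all values are placeholders for illustration only.
const entry = {
  pool: '0xpooladdress-flare',        // unique pool id (placeholder format)
  symbol: 'WFLR-USDC',
  project: 'sparkdex-v4',
  chain: 'flare',
  tvlUsd: 123456.78,
  apyBase: 4.2,
  underlyingTokens: ['0xtoken0', '0xtoken1'],
  // only present when the chosen source reports reward APRs:
  apyReward: 1.3,
  rewardTokens: ['0xrflr'],           // rFLR token address (placeholder)
};
console.log('apyReward' in entry ? entry.rewardTokens.length : 0); // 1
```

The adapter test suite ("allowed fields, unique pool ids, ...") validates exactly this shape for each entry in the array `apy()` returns.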
Checklist
- Adapter under `src/adaptors/sparkdex-v4/`
- Slug `sparkdex-v4` (protocol must be listed on DefiLlama TVL)
- Exports `apy()` and `url`