Add probing service #815
Conversation
👋 Thanks for assigning @tnull as a reviewer!

🔔 1st Reminder Hey @tnull! This PR has been waiting for your review.
Hi @randomlogin, thanks for the work on this! I've reviewed the first two commits:
I've left a bunch of inline comments addressing configuration and public API, commit hygiene, testing infrastructure, and test flakiness.
In summary:
- A couple of items are exposed publicly that seem like they should be scoped to probing or gated for tests only (see `scoring_fee_params` in `Config` and `scorer_channel_liquidity` on `Node`).
- The probing tests duplicate existing test helpers (`setup_node`, `MockLogFacadeLogger`). Reusing and extending what's already in `tests/common/` would reduce duplication and keep the test file focused on the tests themselves.
- `test_probe_budget_blocks_when_node_offline` has a race condition where the prober dispatches probes before the baseline capacity is measured, causing the assertion between the baseline and stuck capacities to fail. Details in the inline comment.
- A few nits about commit hygiene, import structure, and suggestions for renaming things.
Also needs to be rebased.
@enigbe, thanks for the review, the updates are incoming soon.
Force-pushed 436e4a3 to 07dfde4
Force-pushed ff741c2 to c31f1ce
tnull left a comment
Thanks for taking this on and excuse the delay here!
Did a first review pass and this already looks great! Here are some relatively minor comments, mostly concerning the API design.
🔔 4th Reminder Hey @enigbe! This PR has been waiting for your review.

tnull left a comment
Seems tests are failing right now:

    thread 'exhausted_probe_budget_blocks_new_probes' (167312) panicked at tests/probing_tests.rs:381:5:
    no probe dispatched within 15 s

    failures:
        exhausted_probe_budget_blocks_new_probes
        probe_budget_increments_and_decrements
Force-pushed f99786b to 1e73e6e
Force-pushed fe64bd6 to b11cea0
Add integration tests that verify the probing service fires probes on the configured interval and respects the locked-msat budget cap. Shared helpers in tests/common are extended with probing-aware setup. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Force-pushed b11cea0 to 1a8f945
Changed the exhaust test to be statistical (the locked amount never exceeds the cap) instead of trying to take the intermediary routing node offline before it has forwarded the probe HTLC.
Force-pushed 346e002 to 2431f88
Removed usage of `unwrap()` and reworked the exhaust test so it no longer relies on a node-stopping race condition.
tnull left a comment
Excuse the delay here. Looks pretty good, but there are some minor things we might want to address before this can land.
    /// Configures background probing.
    ///
    /// Use [`ProbingConfigBuilder`] to build the configuration:
We should probably add another paragraph before this example that gives some context on what background probing is and why users would want to enable it.
Added documentation on the module level as well as expanded/corrected docs for particular objects (builder).
    use std::{any::Any, sync::Weak};

    #[cfg(feature = "uniffi")]
    use crate::probing::ProbingConfig;
nit: We do these uniffi-specific re-exports via use statements in ffi/types.rs.
Moved to ffi/types.rs
    };
    use peer_store::{PeerInfo, PeerStore};
    #[cfg(feature = "uniffi")]
    pub use probing::ArcedProbingConfigBuilder as ProbingConfigBuilder;
Same here, please expose this in ffi/types.rs
Moved to ffi/types.rs
    }
    }

    /// Configuration for the background probing subsystem.
See above: given this is the main probing-related object it might be worth spending a paragraph here (or on the module docs) to explain what probing is and why users would want to enable it in the first place.
Added documentation on the module level as well as expanded/corrected docs for particular objects (builder).
    LdkEvent::ProbeFailed { .. } => {},
    LdkEvent::ProbeSuccessful { path, .. } => {
        if let Some(prober) = &self.prober {
            prober.handle_probe_successful(&path);
If we handle all probes the same here to reduce the locked amount, we probably also need to mark any amounts that we send manually via preflight probes as locked, as otherwise the accounting is off? Or, maybe it would be preferable to keep track of which paths were for background probing and only reduced the locked amount for them?
Addressed this by adding an inflight_probes hashmap to track our probes.
Changed the event handling to fire background probing events only for these (background) probes.
The code could be DRYed up (the event handlers share parts of their code), but I find the current version clearer.
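The filtering described above could be sketched roughly as follows. This is a simplified illustration with hypothetical stand-in types (`LdkEvent`, `Prober`, and a `u64` payment id), not the actual ldk-node implementation:

```rust
use std::collections::HashSet;

// Hypothetical, simplified stand-ins for the LDK event and prober types.
enum LdkEvent {
    ProbeSuccessful { payment_id: u64 },
    ProbeFailed { payment_id: u64 },
}

struct Prober {
    // PaymentIds of background probes *we* dispatched.
    inflight_probes: HashSet<u64>,
    // Ids whose locked amounts were released (for illustration).
    released: Vec<u64>,
}

impl Prober {
    fn handle_event(&mut self, event: &LdkEvent) {
        let payment_id = match event {
            LdkEvent::ProbeSuccessful { payment_id }
            | LdkEvent::ProbeFailed { payment_id } => *payment_id,
        };
        // Only background probes affect the locked-amount accounting;
        // user-initiated preflight probes are ignored.
        if self.inflight_probes.remove(&payment_id) {
            self.released.push(payment_id);
        }
    }
}
```

The key point is that probe events for user-sent preflight probes fall through without touching the background budget.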
    short_channel_id: via_scid,
    channel_features,
    fee_msat: amount_msat,
    cltv_expiry_delta: 0,
Codex:
- High: RandomStrategy builds invalid final-hop CLTVs. src/probing.rs:584 sets the last hop's cltv_expiry_delta to 0, but LDK expects the last hop's delta to be the destination's final CLTV. For multi-hop random probes, the penultimate node can reject the forward as "outgoing CLTV too soon", so the strategy mostly trains failures instead of probing liquidity. Use a real final CLTV delta, e.g. DEFAULT_MIN_FINAL_CLTV_EXPIRY_DELTA.
Seems valid?
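A minimal sketch of the suggested fix, with hypothetical stand-in types (the real code uses LDK's route-hop type, and the exact value of the final-CLTV constant here is an assumption):

```rust
// Hypothetical constant; LDK exposes a similar default minimum final
// CLTV delta, the exact value here is illustrative.
const MIN_FINAL_CLTV_EXPIRY_DELTA: u32 = 18;

struct Hop {
    cltv_expiry_delta: u32,
}

// Build a route hop: only the final hop gets the destination's final
// CLTV delta; intermediate hops keep their forwarding delta.
fn build_hop(is_final: bool, forwarding_delta: u32) -> Hop {
    let cltv_expiry_delta = if is_final {
        // Previously hardcoded to 0, which the penultimate node may
        // reject as "outgoing CLTV too soon".
        MIN_FINAL_CLTV_EXPIRY_DELTA
    } else {
        forwarding_delta
    };
    Hop { cltv_expiry_delta }
}
```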
    impl ProbingStrategy for RandomStrategy {
        fn next_probe(&self) -> Option<Path> {
            let target_hops = random_range(1, self.max_hops as u64) as usize;
Codex:
- Medium: RandomStrategy can produce one-hop paths that send_probe rejects. src/probing.rs:636 samples target_hops starting at 1, and random_walk(1) will only ever emit single-hop paths, which LDK rejects with "No need probing a path with less than two hops." Clamp/sample to at least two hops or return None for that config.
Seems also valid, probably makes sense to start at 2?
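The suggested clamp could look roughly like this; `sample_target_hops` and the injected `random_range` closure are hypothetical illustrations, not the PR's actual helpers:

```rust
// Hypothetical helper illustrating the clamp; `random_range(lo, hi)` is
// assumed to return a value in [lo, hi].
fn sample_target_hops(
    max_hops: usize, random_range: impl Fn(u64, u64) -> u64,
) -> Option<usize> {
    if max_hops < 2 {
        // LDK rejects probes with "No need probing a path with less
        // than two hops", so such a config cannot yield a valid probe.
        return None;
    }
    // Sample starting at 2 instead of 1.
    Some(random_range(2, max_hops as u64) as usize)
}
```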
Addressed all the issues mentioned; each has its respective commit. If they look fine, I'm going to squash them.
Force-pushed a956232 to f2706d2
Previously, when calculating the currently locked amount, we didn't account for preflight probes sent for a payment, which could result in an incorrect probe locked_msat value. Now Prober saves the PaymentId of probes it sent and tracks them on release, ignoring the user-sent ones.
Force-pushed f2706d2 to 8acd55d
    /// Configures background probing.
    ///
    /// See [`ProbingConfig`] for details.
Please make sure the docs on the ArcedNodeBuilder mirror the ones on the regular NodeBuilder.
    LdkEvent::ProbeFailed { .. } => {},
    LdkEvent::ProbeSuccessful { path, payment_id, .. } => {
        if let Some(prober) = &self.prober {
            if let Some(amount) =
Maybe just move this check into handle_background_probe_successful/handle_background_probe_failed? Then you could keep inflight_probes visibility at the module level, too.
    pub interval: Duration,
    /// Maximum total millisatoshis that may be locked in in-flight probes at any time.
    pub max_locked_msat: u64,
    pub(crate) locked_msat: Arc<AtomicU64>,
Isn't this redundant with the sum over all inflight_probes now? It's probably better to keep a single state for accounting purposes, to avoid the possibility of them getting out of sync.
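A sketch of the single-source-of-truth accounting this comment suggests, with hypothetical simplified types (the real Prober's fields and locking differ):

```rust
use std::collections::HashMap;

// Hypothetical simplified state: the inflight map is the single source of
// truth, and the locked total is derived from it on demand rather than
// maintained in a separate counter that could drift out of sync.
struct Inflight {
    probes: HashMap<[u8; 32], u64>, // payment_id -> locked msat
}

impl Inflight {
    fn locked_msat(&self) -> u64 {
        self.probes.values().sum()
    }

    // Check the budget cap before dispatching a new probe.
    fn may_dispatch(&self, amount_msat: u64, max_locked_msat: u64) -> bool {
        self.locked_msat().saturating_add(amount_msat) <= max_locked_msat
    }
}
```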
Added a probing service which is used to send probes to estimate channels' capacities.

Related issue: #765.

Probing is intended to be used in two ways.

For probing, a new abstraction `Prober` is defined and is (optionally) created during node building. `Prober` periodically sends probes to feed the data to the scorer.

`Prober` sends probes using a `ProbingStrategy`. The `ProbingStrategy` trait has only one method:

    fn next_probe(&self) -> Option<Probe>;

Every tick it generates a probe, where `Probe` represents how to send a probe. To accommodate the two different ways probing is used, we either construct a probing route manually (`Probe::PrebuiltRoute`) or rely on the router/scorer (`Probe::Destination`).

`Prober` tracks how much liquidity is locked in-flight in probes and prevents new probes from firing if the cap is reached.
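The abstractions above could be sketched like this (placeholder types stand in for the actual ldk-node/LDK types such as paths and public keys; the example strategy is purely illustrative):

```rust
// Placeholder types standing in for the actual ldk-node/LDK types.
type NodeId = u64;
type Route = Vec<NodeId>;

/// How a probe is sent: either along a manually built route, or to a
/// destination for which the router/scorer picks the path.
enum Probe {
    PrebuiltRoute(Route),
    Destination(NodeId),
}

/// Each tick, the prober asks its strategy for the next probe to send.
trait ProbingStrategy {
    fn next_probe(&self) -> Option<Probe>;
}

/// A trivial example strategy that always probes a fixed destination.
struct FixedDestination(NodeId);

impl ProbingStrategy for FixedDestination {
    fn next_probe(&self) -> Option<Probe> {
        Some(Probe::Destination(self.0))
    }
}
```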
There are two probing strategies implemented:

- Random probing strategy: picks a random route from the current node; the route is probed via `send_probe`, thus ignoring scoring parameters (which hops to pick). It also ignores `liquidity_limit_multiplier`, which prohibits taking a hop whose capacity is too small. It is a truly random route.
- High-degree probing strategy: examines the graph, finds the nodes with the biggest number of (public) channels, and probes routes to them using `send_spontaneous_preflight_probes`, which uses the current router/scorer.

The former is meant to be used on payment nodes, while the latter on probing nodes. For the HighDegreeStrategy to work, it is recommended to set `probing_diversity_penalty_msat` to some nonzero value to prevent route reuse; however, it may then fail to find any available routes.

There are three tests added:
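The high-degree selection described above could be sketched as follows, over a hypothetical simplified graph view (node id mapped to public-channel count); the real implementation walks the LDK network graph:

```rust
use std::collections::HashMap;

// Pick the k nodes with the most public channels from a simplified
// graph view (node id -> public channel count).
fn top_degree_nodes(channel_counts: &HashMap<u64, usize>, k: usize) -> Vec<u64> {
    let mut nodes: Vec<(u64, usize)> =
        channel_counts.iter().map(|(&n, &c)| (n, c)).collect();
    // Sort by channel count descending; break ties by node id for
    // determinism.
    nodes.sort_by(|a, b| b.1.cmp(&a.1).then(a.0.cmp(&b.0)));
    nodes.into_iter().take(k).map(|(n, _)| n).collect()
}
```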
Example output (runs for ~1 minute, needs the `--nocapture` flag):

For performance testing I had to expose the scoring data (`scorer_channel_liquidity`).

Also exposed `scoring_fee_params: ProbabilisticScoringFeeParameters` to `Config`.

TODOs: