
Conversation


@ariard ariard commented Oct 4, 2022

Ongoing; will add improvement suggestions for eltoo-peer-protocol.md and eltoo-transactions.md.

instagibbs and others added 30 commits April 15, 2022 08:48
Gossip query compression is not very useful: it was added for
mobile clients to, in theory, sync the gossip data directly from
P2P peers, but to my knowledge no mobile clients actually use it
for that, or at least not in a setting where the gossip *query* data
is a substantial portion of their overall bandwidth usage.

Further, because of the semantics of `gossip_timestamp_filter`, it's
impractical to ensure you receive a reliable, full view of the
gossip data without re-downloading large portions of it on startup.

Ultimately, gossip queries are a pretty suboptimal method of
synchronizing the gossip data. If someone wants highly optimized
gossip data synchronization, a new method based on set
reconciliation needs to be proposed.

Finally, the current gossip query encoding semantics do not allow
for negotiation and instead require that all lightning implementations
take a zlib dependency in some form or another. Given the recent
zlib decoding memory corruption vulnerability, this seems like an
opportune time to simply remove the zlib support, requiring that
nodes stop sending compressed gossip query data (though they can
keep accepting such data for as long as they wish).

This is an alternative to the suggested gossip query encoding
support in lightning#825.
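For context, the wire-level change is small. A minimal sketch, assuming the BOLT 7 `encoded_short_ids` layout (a one-byte encoding type, `0x00` for raw and `0x01` for zlib, followed by big-endian 8-byte ids; the function names are illustrative, not any implementation's API), of the "stop sending, keep reading" policy:

```python
import struct
import zlib

ENCODING_UNCOMPRESSED = 0x00  # raw array of 8-byte short_channel_ids
ENCODING_ZLIB = 0x01          # zlib-deflated array (no longer sent)

def encode_short_channel_ids(scids):
    """Always emit the uncompressed encoding, per the proposal above."""
    return bytes([ENCODING_UNCOMPRESSED]) + b"".join(
        struct.pack(">Q", scid) for scid in scids
    )

def decode_short_channel_ids(payload, accept_zlib=True):
    """Nodes may keep *reading* zlib-encoded queries for as long as they wish."""
    encoding, data = payload[0], payload[1:]
    if encoding == ENCODING_ZLIB:
        if not accept_zlib:
            raise ValueError("zlib-encoded gossip queries no longer accepted")
        data = zlib.decompress(data)
    elif encoding != ENCODING_UNCOMPRESSED:
        raise ValueError(f"unknown encoding byte {encoding:#x}")
    if len(data) % 8:
        raise ValueError("truncated short_channel_id array")
    return [struct.unpack(">Q", data[i:i + 8])[0]
            for i in range(0, len(data), 8)]
```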
This is a minor clarification that the `to_self_delay` is enforced
in a 2nd-stage transaction for HTLCs, while it's directly enforced
in the commit tx for the main output.
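To make the distinction concrete, a sketch assuming the BOLT 3 script layout (opcodes rendered as strings, purely illustrative): the main output carries the CSV delay in its own script, while an HTLC output does not; the same delayed script only appears on the single output of the second-stage HTLC transaction.

```python
def delayed_payment_script(revocation_pubkey, local_delayed_pubkey, to_self_delay):
    """BOLT 3 `to_local`-style script: the CSV delay is enforced directly
    by whichever output carries this script."""
    return [
        "OP_IF",
        revocation_pubkey,                      # penalty path, no delay
        "OP_ELSE",
        to_self_delay, "OP_CHECKSEQUENCEVERIFY", "OP_DROP",
        local_delayed_pubkey,                   # delayed claim path
        "OP_ENDIF",
        "OP_CHECKSIG",
    ]

# Main output: the commitment tx's `to_local` output uses this script, so
# `to_self_delay` is enforced directly in the commit tx.
# HTLC outputs: the commitment tx's HTLC scripts carry no CSV; instead, the
# second-stage HTLC-success/HTLC-timeout tx has a single output bearing this
# same delayed script, so the delay only starts once that tx confirms.
```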
Remove zlib gossip query compression support
- Note: watching for mempool transactions should result in lower-latency
HTLC redemptions.

Reacting to transactions broadcast in the local node mempool is unsafe as a
@instagibbs (Owner)

may be unsafe?

It might be a watchtower doing the reacting, and if you're spending the settlement outputs to CPFP, nothing is "locked up".

@ariard (Author)


Sure "maybe unsafe" as there is a lot of factors to weigh in here.

Well, if you're CPFPing an update or settlement, I think you're assuming some fee-bumping reserves are locked up in the post-anchor model. If your fee-bumping reserve is global (iirc what LND does), someone could exhaust your fee-bumping reserves on your low-value channel A to then attack your high-value channel B.

That said, given all the implications for watchtower infrastructure and fee-bumping reserve management, I don't know if you want all of that to be present in this document.
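To illustrate the exhaustion concern, a toy per-channel reserve ledger (all names hypothetical, not any implementation's actual API); with a single global pool, the `channel_id` key disappears and forced bumps on low-value channel A drain the budget meant for high-value channel B:

```python
from dataclasses import dataclass, field

@dataclass
class FeeBumpReserves:
    """Hypothetical per-channel reserve accounting: a counterparty who forces
    fee bumps on one channel cannot drain funds earmarked for another."""
    per_channel: dict = field(default_factory=dict)  # channel_id -> sats

    def reserve_for(self, channel_id, sats):
        self.per_channel[channel_id] = self.per_channel.get(channel_id, 0) + sats

    def spend_bump(self, channel_id, sats):
        available = self.per_channel.get(channel_id, 0)
        if sats > available:
            raise RuntimeError(f"fee-bumping reserve exhausted for {channel_id}")
        self.per_channel[channel_id] = available - sats
```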

@instagibbs (Owner)


The settlement tx can use its own outputs to CPFP, thanks to v3 + ephemeral anchors.

It's subtle, so I left in a modified version of your text:

one block CSV required to avoid pinning by counterparty, with the exception
of a single 0-value anchor output.
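For intuition on why nothing needs to be locked up in the settlement tx itself, a small worked sketch of the package fee math under v3 + ephemeral anchors (numbers purely illustrative): the parent settlement tx pays zero fee, and whoever spends the 0-value anchor covers the whole package.

```python
def required_child_fee(parent_vsize, child_vsize, target_feerate, parent_fee=0):
    """Fee the CPFP child must pay so the {settlement tx, child} package
    reaches target_feerate (sat/vB). With a 0-fee settlement tx and a
    0-value ephemeral anchor, the child pays for both transactions."""
    package_vsize = parent_vsize + child_vsize
    return max(0, target_feerate * package_vsize - parent_fee)

# e.g. a 300 vB settlement tx plus a 150 vB anchor spend at 20 sat/vB:
assert required_child_fee(300, 150, 20) == 9000  # child alone pays 9000 sats
```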

The one block CSV delay should be added to the node for any `cltv_expiry_delta`
@instagibbs (Owner)


Oops, I removed all the 1 CSV locks but missed this text. Will remove.

@instagibbs (Owner)

thanks, took what I liked

@instagibbs instagibbs closed this Oct 4, 2022
@instagibbs instagibbs reopened this Oct 4, 2022
@instagibbs (Owner)

Oh, will leave open in case you have more; I pushed updates.

@ariard ariard commented Oct 5, 2022

Yep, added a new commit 8222cec to the eltoo-peer-protocol review. Take what you feel is good here; the primary motivation is to catch myself up on the current Eltoo channel mental model.

@instagibbs (Owner)

thanks, took some more 👍
