Previously we covered how a submarine swap could be used to pay a lightning invoice from an off-grid device, in a trustless manner, in order to send a Blockstream Satellite transmission. Find part 1 of this series, “lntenna-python – Part 1” here:
In this follow-up work we take the approach one step further: making a native lightning transaction over the mesh network, using goTenna mesh devices and C-Lightning, in order to pay for the same Blockstream Satellite Transmission message service, using the “lightningtenna” program to do so. The motivation for this is to remove the “on-chain” component of the submarine swap approach: the time spent waiting for on-chain confirmations, and the multiple on-chain transaction fees incurred per lightning payment.
Mesh network bandwidth constraints
Using a full lightning node from within a mesh network encounters some fundamental limitations that we weren’t exposed to when we used the submarine swap technique in Part 1. Those familiar with mesh networks, and with other low bandwidth transmission schemes such as amateur radio, will be well-accustomed to these. The most apparent during testing and development was network bandwidth.
In the case of the goTenna mesh devices, this bandwidth limitation is multi-faceted:
Broadcast-type transmissions (max 3 hops):
> Transmission rate limited to 5 messages per 60 second moving window
> 210 Bytes maximum (binary) message payload
Unicast-type transmissions (max 10 hops):
> Transmission rate ?? (TBD, greater than 5 / 60!)
> 210 Bytes maximum (binary) message payload
This gives us an average maximum of approximately 17.5 B/s or 140 bit/s (~150 baud) in each direction for broadcasts and [~estimate 1kbps] for unicasts. For many applications, such as text messaging and GPS coordinate exchange, this is usually enough; lightning nodes, however, like to exchange messages a little more frequently than this, and some of those messages are a fair bit larger, as we will see.
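The broadcast figures above can be verified with a quick back-of-the-envelope calculation:

```python
# Effective mesh bandwidth from the broadcast limits above:
# 5 messages per 60 s moving window, 210 B max (binary) payload per message.
MAX_PAYLOAD_BYTES = 210
MSGS_PER_WINDOW = 5
WINDOW_SECONDS = 60

bytes_per_second = MAX_PAYLOAD_BYTES * MSGS_PER_WINDOW / WINDOW_SECONDS
bits_per_second = bytes_per_second * 8

print(f"{bytes_per_second} B/s")   # 17.5 B/s
print(f"{bits_per_second} bit/s")  # 140.0 bit/s
```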
Lightning message encryption
In keeping with its strong privacy-forward operational model, all lightning communications are encrypted end-to-end at the transport layer according to the specification found in BOLT8, complete with an authenticated key agreement handshake based on the Noise Protocol Framework. Subsequent messages are then Authenticated Encryption with Associated Data (AEAD) ciphertexts.
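To illustrate the framing, here is a toy sketch, using a stand-in cipher rather than BOLT8’s real ChaCha20-Poly1305 construction, of how both the length prefix and the payload travel as separate authenticated ciphertexts:

```python
# Toy illustration of BOLT8-style framing (NOT the real ChaCha20-Poly1305
# construction): both the 2-byte length prefix and the payload are encrypted
# and authenticated separately, each under its own nonce, so an observer on
# the wire learns nothing -- not even how long each message is.
import hashlib
import hmac
import struct

def _toy_aead(key: bytes, nonce: int, data: bytes) -> bytes:
    """Stand-in AEAD: XOR keystream derived from SHA-256, plus an HMAC tag."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + struct.pack("<QQ", nonce, counter)).digest()
        counter += 1
    ct = bytes(a ^ b for a, b in zip(data, stream))
    tag = hmac.new(key, struct.pack("<Q", nonce) + ct, hashlib.sha256).digest()[:16]
    return ct + tag

def frame_message(key: bytes, nonce: int, payload: bytes) -> bytes:
    # Encrypt the 2-byte length first (18 bytes on the wire: 2 ciphertext
    # + 16 tag), then the payload (len + 16 tag), each consuming one nonce.
    length = _toy_aead(key, nonce, struct.pack(">H", len(payload)))
    body = _toy_aead(key, nonce + 1, payload)
    return length + body

wire = frame_message(b"\x01" * 32, 0, b"init message bytes")
# 18-byte encrypted length prefix + payload + 16-byte tag
assert len(wire) == 18 + len(b"init message bytes") + 16
```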
This message encryption scheme goes so far as to encrypt even the length of the message, so no simple MITM proxying of messages across the mesh is possible; at least, not one where we read the message length first, reconstitute the full message, and then pass it on to the remote node in its entirety.
However, this does bring us a clear benefit in our threat model: we can now be sure that an untrusted mesh gateway node (mesh <-> internet), relaying our lightning messages to the wider internet on our behalf, can neither see their contents nor guess our activity type from the message lengths.
Lightning node operational messages — the general case
When a lightning node is started and it finds an active bitcoin node backend, network activity begins immediately. It will activate a listening port, usually 9735, to listen for incoming connections from peers.
Next it will try to reconnect to peers it has open channels with, first performing the cryptographic handshake mentioned previously, followed by sending an INIT message to (re)declare what protocol-level features it has active and waiting for the corresponding INIT message in response from the peer.
Next, the node will attempt channel reestablishment for each channel, also verifying that both peers agree on the state of the payment channel.
Once the channel is re-established as being online and ready for use, the node will determine whether it wants gossip messages to update its view of the network graph (used for route-finding of payments) and then request these updates by sending a TIMESTAMP_FILTER message to the channel peer, requesting they send updates within a certain time range. This request might also be made from the channel peer back to our node, over a different date range.
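As a sketch of what that filter request looks like on the wire (per BOLT7, which defines `gossip_timestamp_filter` as message type 265 carrying a chain hash and a timestamp window; the chain hash below is a placeholder, not a real genesis hash):

```python
# Construct a BOLT7 gossip_timestamp_filter message: type (u16), then
# chain_hash (32 B), first_timestamp (u32), timestamp_range (u32),
# all big-endian, before any BOLT8 transport encryption is applied.
import struct
import time

GOSSIP_TIMESTAMP_FILTER = 265

def gossip_timestamp_filter(chain_hash: bytes, first_timestamp: int,
                            timestamp_range: int) -> bytes:
    assert len(chain_hash) == 32
    return (struct.pack(">H", GOSSIP_TIMESTAMP_FILTER) + chain_hash
            + struct.pack(">II", first_timestamp, timestamp_range))

# "everything from the last hour onwards" (placeholder all-zero chain hash)
msg = gossip_timestamp_filter(b"\x00" * 32, int(time.time()) - 3600, 0xFFFFFFFF)
assert len(msg) == 2 + 32 + 4 + 4  # 42 bytes of plaintext payload
```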
This roughly completes the startup messaging exchange. Measurements show that for a single peer, this equates to ~10 messages exchanged at startup, not counting any actual gossip messages transmitted.
Already we can calculate that passing this through the mesh network will take a minimum of 2 minutes using broadcast, or [< 2 minutes] using unicast; but each mesh node has its own independent limit, and the messages are roughly bi-directional, so this becomes ~1 minute using broadcast and [very little] using unicast.
Paying an invoice needs a further 20 messages to be exchanged: 13 from the sender and 7 from the receiver. This means it takes a total of 30 lightning messages to get online, re-establish the channel and negotiate a new HTLC update in order to make a payment.
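The message counts above translate to airtime as follows, assuming the sustained broadcast rate limit:

```python
# Rough timing of the startup + payment message counts over the broadcast
# rate limit (5 messages / 60 s window => one message every 12 s, sustained).
STARTUP_MSGS = 10   # handshake, INIT exchange, channel_reestablish, etc.
PAYMENT_MSGS = 20   # 13 from the sender, 7 from the receiver
SECONDS_PER_MSG = 60 / 5

total_msgs = STARTUP_MSGS + PAYMENT_MSGS
print(total_msgs)                              # 30 messages end to end
# If each direction is rate-limited independently and traffic is roughly
# bi-directional, each node only sends about half of them:
print(total_msgs / 2 * SECONDS_PER_MSG / 60)   # ~3 minutes of airtime
```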
Users of the lightning network might also notice that even this conservative, minimum-required message count can be blown way up if we encounter bad routes, or any of the myriad other conditions lightning payments can currently encounter that require retries. We can avoid some of this complexity by focusing on a single application, the Blockstream Transmission API, and connecting to a peer that has a good path to Blockstream’s Lightning node.
Above we excluded gossip messages, and for good reason; if your node has been offline for some time, as we would expect a mesh network node to have been, then the gossip updates requested can be relatively expensive. On testnet, where we would expect to see more churn (nodes going on- and offline regularly), we measured up to 2.6MB of gossip being requested after a few days offline. This would take us 12,380 mesh network messages to sync, or approximately 41.5 hours of continuous messaging using broadcast, and [less] using unicast. If we miss a message or deliver one out of order, then we will have to reset the connection with the peer, starting again from the initial handshake and then the gossip sync. This is not ideal.
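A rough cross-check of those gossip figures (the numbers differ slightly from the article’s depending on rounding):

```python
# Ballpark cost of syncing ~2.6 MB of gossip over the mesh broadcast limits.
import math

GOSSIP_BYTES = 2_600_000       # ~2.6 MB measured after a few days offline
MAX_PAYLOAD_BYTES = 210
SECONDS_PER_MSG = 60 / 5       # sustained rate: one message per 12 s

messages = math.ceil(GOSSIP_BYTES / MAX_PAYLOAD_BYTES)
hours = messages * SECONDS_PER_MSG / 3600
print(messages)          # 12381 mesh messages
print(round(hours, 1))   # ~41.3 hours of continuous broadcasting
```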
Offsetting this, once gossip for the node has been synchronised, traffic over the network reverts to very minimal levels, something in the order of < 12MB daily: a testament to the clever design of, and improvements to, the protocol itself and the node implementations. For most scenarios this is a perfectly acceptable level of traffic.
Even bigger improvements to the gossip protocol are in the pipeline. Already INITIAL_SYNC, a node feature-bit which simply requested “all gossip” from a new peer, has been deprecated in favour of gossip_queries, which requests that a peer send only exactly what you ask for. All but the oldest (and probably now CVE-vulnerable) nodes use the new feature-bit.
A true “zero gossip” mode more suitable for mesh network and low bandwidth nodes does not yet exist, as far as we are aware, in the major implementations. We believe that the aim in this area is to push low-bandwidth users towards private channels combined with trampoline routing schemes (outsourced routing), so that a full network graph is not necessary. When selecting an implementation to use, we found that C-Lightning had a special developer-only RPC, dev-suppress-gossip, which forces nodes to not request any new gossip from newly-connected peers. As of C-Lightning v0.7.3rc1 this mode appears to work extremely well in suppressing the node’s requests for gossip from its peers.
This solved only half of the puzzle though, as our remote peer, whom we do not necessarily control, might still request gossip from us, causing our node to obediently send back a few MB of likely-outdated gossip data. We are working on a small C-Lightning patch to reject, ignore, or reply with nothing to these requests in a BOLT7-compliant manner.
Due to the aforementioned gossip limitations, for the purposes of this demonstration we used two C-Lightning nodes, one an off-grid mesh node (MESH) and one on the wider testnet network (REMOTE), both controlled by ourselves and both with `dev-suppress-gossip` enabled to stop gossip messages in both directions. In order to enable this mode, C-Lightning v0.7.3rc1 (or newer) must be compiled with `--enable-developer` on both MESH and REMOTE.
When running the tests like this, we encountered another limitation in the C-Lightning implementation that required an additional patch on the MESH node. There exists a 30 second limit for the channel to commit the first HTLC before timing out. In general on the lightning network, once the first HTLC has been committed to by your node, it is impossible to know whether the payment will get finalised unless it expires or you receive the preimage, so timeouts like this are in many ways redundant (see also, for example: HODL invoices in LND). This additional check in C-Lightning (and likely something similar in other implementations) is designed to improve user experience, and network user experience as a whole, by not waiting too long on a likely-down channel for a response.
The first message in the payment sequence is on the order of 1,560 bytes. Or, in goTenna mesh terms, 8 messages at >= 1m36s of broadcast transmission time or [less] unicast time. After increasing the HTLC commitment timeout on the MESH node to 300 seconds, as shown as part of this patch, we are ready to begin paying arbitrary lightning invoices from our off-grid node.
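The fragmentation arithmetic behind those figures:

```python
# Fragmenting the first payment message (~1,560 B) into 210 B mesh payloads
# and timing it at the sustained broadcast rate of one message per 12 s.
import math

FIRST_PAYMENT_MSG_BYTES = 1560
MAX_PAYLOAD_BYTES = 210
SECONDS_PER_MSG = 60 / 5

fragments = math.ceil(FIRST_PAYMENT_MSG_BYTES / MAX_PAYLOAD_BYTES)
seconds = fragments * SECONDS_PER_MSG
print(fragments)                                    # 8 mesh fragments
print(f"{seconds // 60:.0f}m{seconds % 60:.0f}s")   # 1m36s of broadcast time
```

Well inside the patched 300 second HTLC timeout, but clearly over the default 30 seconds.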
What we have demonstrated here is that the lightning network can both reach, and be used by, off-grid users, and can in the general case be used over alternative last-mile communications systems, increasing the network’s resilience to infrastructure failure and censorship. This is one more example of how the Lightning Network can improve the scalability of the Bitcoin payment ecosystem.
The project is available via github at https://github.com/willcl-ark/lightningtenna.
The README.md in the repository also includes more technical details of how to set up and use the program.
If you have any more questions about the project, or issues using it, please feel free to open an issue on the GitHub repository, contact me on Twitter @willcl_ark, or email [email protected]