
DarkMX Support Forum
Questions and comments about the software

Decoupling resource links from users

by Guest on 2024/01/08 11:02:22 AM    
Currently it looks like every directory or file is directly associated 1:1 with the user's 52-character onion address.

The fundamental question here is:
Is it possible to improve location anonymity?

This leads to two narrower questions:

1. Would there be any benefit in associating resource links with networks, channels, or some extra glue ID, such as a hash of something (e.g. a concatenation of IDs)?

2. If yes, and the approach just hasn't been implemented yet - when might it be achievable?
On the other hand, if the answer to Q.1 is no, that splits into alternatives. Maybe user network aliases, or a pool of onions that switches the addresses in resource links dynamically (a variable instead of a hard-coded link)?

Best would be a master keypair decoupled from the transport address, allowing extras like separate devices/profiles under a single user's control. But that would be another design?
by KH on 2024/01/11 06:46:45 PM    
What you said at the end is exactly right.  It would require a major redesign to make a system of blinded keys for locating user+content.

To really separate content identity from user identity, the library would have to hash all files.  Links could then be a simple file hash + name.  The problem is that on very large libraries it is cumbersome to have to read and hash the whole thing, which is what I was trying to avoid in the first place.  Maybe it's time to reconsider that.
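The "file hash + name" link mentioned above could look something like this minimal sketch, assuming a SHA-256 content hash and a hex encoding (the real scheme, hash algorithm, and encoding are all open questions in this thread):

```python
import hashlib

def file_hash_link(path: str, display_name: str) -> str:
    """Hash a file's content in fixed-size chunks (so memory use stays
    flat even for huge files) and build a link of the form
    /<hex hash>/<name>. Purely illustrative; not an existing DarkMX API."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return f"/{h.hexdigest()}/{display_name}"
```

Chunked reading addresses part of the RAM concern, but the cost of reading every byte of a multi-terabyte library remains, which is the trade-off KH describes.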
by Guest on 2024/01/12 07:20:35 AM    
Can't we have the benefits of both?

Meaning several things at once:

1. location decoupling
2. no need to hash the content of files/collections, which also allows content updates within them
3. deriving the required hash from a concatenation of several quick-to-compute metadata values

What could the #3 be?
by Guest on 2024/01/12 07:57:03 AM    
Could the #3 hash be taken from a simple concatenation of the following:

A. A secret sharing key that all the users sharing the content possess (location independent!)

B. A collection's virtual directory/bundle locator (device independent!), e.g. /tech/it/darkmx/design/suggestions/dev_lab/45

?

The link to a resource could be that hash plus a short name alias containing the first few letters of the file name, followed by its order number in the sorted list:

/ftj7r54br792hsak702gw6c5dit5/decoup45
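A rough sketch of that construction, assuming SHA-256 over the concatenation and a truncated base32 encoding so the result resembles the example link (every name and parameter here is illustrative, not an existing DarkMX API):

```python
import base64
import hashlib

def bundle_link(secret_key: bytes, locator: str,
                file_name: str, index: int) -> str:
    """Build a link as hash(secret key || virtual locator), base32-encoded
    and truncated, plus a short alias: the first letters of the file name
    followed by its order number in the sorted listing."""
    digest = hashlib.sha256(secret_key + locator.encode()).digest()
    bundle_id = base64.b32encode(digest).decode().lower().rstrip("=")[:28]
    alias = "".join(c for c in file_name.lower() if c.isalnum())[:6]
    return f"/{bundle_id}/{alias}{index}"
```

Note the property this buys: anyone holding the same secret key and locator derives the same link, with no onion address in it - which is exactly KH's question below about keeping that key secret.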
by Guest on 2024/01/12 08:24:05 AM    
Maybe suffixed with the size class of the resource, for easier searches and for splitting transfers between peers according to size ranges, like

1 - up to 255 bytes
2 - 256b-64k
3 - 64k-2M
4 - 2M-64M
5 - 64M-4G
...

So the link above would be like

/ftj7r54br792hsak702gw6c5dit5/decoup45/2

or

/ftj7r54br792hsak702gw6c5dit5/decoup45.2

or

/ftj7r54br792hsak702gw6c5dit5/decoup45#2


Thus we'd get the decoupling of (re)sources from transport addresses while keeping everything lean, without needing RAM for the shared-content hashing task.
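The size-class mapping above could be sketched as follows (the boundaries are just the thread's example values, and which link separator to use - /, . or # - is left open):

```python
def size_class(size_bytes: int) -> int:
    """Map a resource size to the class suffix proposed above:
    1 = up to 255 B, 2 = up to 64 KiB, 3 = up to 2 MiB,
    4 = up to 64 MiB, 5 = up to 4 GiB, and so on."""
    bounds = [255, 64 * 1024, 2 * 1024**2, 64 * 1024**2, 4 * 1024**3]
    for cls, upper in enumerate(bounds, start=1):
        if size_bytes <= upper:
            return cls
    return len(bounds) + 1  # beyond the listed ranges

# e.g. a 1 MiB file falls into class 3, using the dot variant:
link = f"/ftj7r54br792hsak702gw6c5dit5/decoup45.{size_class(1024**2)}"
```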

Thoughts?
by Guest on 2024/01/13 12:55:54 PM    
Out of my knowledge range here, but what about a two-part hashing index: the original hash for resource identity, plus a version hash that changes if the resource gets edited or updated?

And ideally a uniform system like this, usable across any file transfer protocol, be it torrents, Fopnu, DarkMX, etc.?

That way, a single hash table would provide for all apps simultaneously, reducing RAM load on system.

Might need BT V 3?

Dreaming
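The two-part idea could be sketched like this, assuming SHA-256 for both parts (illustrative only; no existing BitTorrent, Fopnu, or DarkMX structure is implied):

```python
import hashlib

class Resource:
    """Two-part hash index sketch: 'identity' is fixed when the resource
    is first shared, while 'version' is recomputed on every edit."""
    def __init__(self, content: bytes):
        self.identity = hashlib.sha256(content).hexdigest()
        self.version = self.identity  # the first version is the original
    def update(self, new_content: bytes) -> None:
        self.version = hashlib.sha256(new_content).hexdigest()
        # identity stays the same, so old links keep resolving
```

Links would carry only the identity hash; peers compare version hashes to decide whether they hold the latest revision.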
by KH on 2024/01/14 10:40:26 PM    
A. A secret sharing key that all the users sharing the content possess (location independent!)

Is this sharing key going to be different for every user?  Otherwise it won't be a secret.

Aside from that, given just a link, can you take us through the steps the software would have to go through to translate the link into an onion address to actually download the content from?

Keep in mind, it is possible for a single client to set up a few different onion services simultaneously, if that helps.

We also need to prevent a malicious user from poisoning a link in an open protocol environment.
by Guest on 2024/01/16 06:25:03 PM    
The idea is that the key is secret in the sense that all nodes of the ring (or circle) that participate in seeding have the same key, so the resource links will be the same, whether it's a single user's nodes in different locations or a bunch of guys. Onionbalance https://gitlab.torproject.org/tpo/onion-services/onionbalance , https://onionbalance.readthedocs.io/en/latest/use-cases.html could scale up to 60 nodes (*1). Even if the limit were set to 50, that would provide a basis for circles. The aim is resilience against some locations being shut down.

If Tor hidden-service load balancing could provide a link that looks the same for those up-to-50 nodes, great. If not, the link would have to point to an onion service that works like a connection broker, which makes it a single point of failure and not so desirable. On the other hand, the latter could allow several connection brokers (imported manually, or as part of the current Network concept?) just in case some node can no longer be trusted, so a newly regenerated resource link starts redirecting to the nodes that remain trusted.

Links:
*1 https://tor.stackexchange.com/questions/13/can-a-hidden-service-be-hosted-by-multiple-instances-of-tor
*2 https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2144

If the composite links can't provide the needed ease of P2P, the other approach would be re-implementing RetroShare with content hashing AND inventing some tricks to severely reduce the memory footprint.
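The broker-fallback behavior described above could be sketched as follows; the `query` callable is a stand-in for whatever transport the real design would use, which this thread leaves undefined:

```python
from typing import Callable, List, Optional

def resolve(link_hash: str, brokers: List[str],
            query: Callable[[str, str], Optional[str]]) -> Optional[str]:
    """Ask each known broker onion in turn for a seeder address until
    one answers, so a single dead or distrusted broker is not a single
    point of failure. Names are illustrative, not an existing API."""
    for broker in brokers:
        try:
            seeder = query(broker, link_hash)
        except ConnectionError:
            continue  # broker unreachable or distrusted; try the next one
        if seeder:
            return seeder
    return None  # no broker could resolve the link
```

The broker list itself would come from the manual import (or Network concept) mentioned above, so regenerating the list is how a distrusted node gets evicted.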
by Guest on 2024/01/19 12:01:17 AM    
The problem with using a signature scheme (e.g. a Tor hidden service) to secure a file link is that everyone who hosts that exact link has to keep a private key secret.  If the key gets out, anyone can poison the file.  If you use a different key, the link is different, and it's a different swarm.

It would be better to just host files directly on a network/channel.  Eventually those could be load-balanced but it gets complicated keeping the data synced between instances.
by Guest on 2024/01/19 07:16:32 AM    
content hashing AND invent some tricks to severely reduce memory footprint.

Fopnu already does exactly this.  It uses disk files in a special directory to keep the file hash fields out of RAM.  You can share a lot of files before even consuming a gig of RAM.

But if you want immutable content links not associated with a particular user, hashing is a certain price that must be paid.  Every file must be read to be shared, plus a small amount of CPU to compute the hash.  For a library with several TB and a hundred thousand files, that can take a very long time.

There are compromises on both ends.
by Guest on 2024/01/19 09:39:00 PM    
It would be better to just host files directly on a network/channel.
Sure, as that decouples from the user/owner's address. Although a network/channel address is also an onion that could eventually be located? Then the network/channel has to have a means to hide (not list) owners' addresses to have any positive effect.

Eventually those could be load-balanced but it gets complicated keeping the data synced between instances.
In that case, custom syncing schemes between trusted nodes would be a better approach.

Fopnu already does exactly this
Then the obvious way to go would be implementing the Fopnu approach within the Tor or I2P network. Would that be more efficient than RetroShare?

But if you want immutable content links not associated to a particular user, hashing is...
Can we have mutable (i.e. editable/updatable/renameable) content with immutable pointer links not associated with a user?

Like a link, that is a text file containing the following:

{ hash of the concatenation of the network/channel address with the generic (immutable) name of a bundle/collection/sub-tree/file - this hash is also the name of the link }
{ generic (immutable) name, exposed for searches, goes into the hash }
{ common size-range interval (as described in the posts above) and the type of files/media, exposed for search, not hashed }
{ local links to the files/sub-tree of the collection to share, not hashed }
{ possibly an integrity-check ID }

That allows changing the network address, but then the hash changes as well.
Doable?
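A minimal sketch of that link-file layout, assuming SHA-256 and taking only the channel address and the immutable bundle name into the hash, so everything else can change without breaking the link (all field names are illustrative):

```python
import hashlib
from typing import List

def make_link_file(channel_addr: str, bundle_name: str,
                   size_class: int, media_type: str,
                   local_paths: List[str]) -> dict:
    """Build the link-file record described above. Only channel_addr
    and bundle_name are hashed; the remaining fields are mutable
    search/transfer metadata."""
    link_hash = hashlib.sha256(
        (channel_addr + bundle_name).encode()).hexdigest()
    return {
        "hash": link_hash,           # also the name of the link itself
        "name": bundle_name,         # immutable, exposed for searches
        "size_class": size_class,    # searchable, not hashed
        "media_type": media_type,    # searchable, not hashed
        "local_paths": local_paths,  # mutable local links, not hashed
    }
```

With this split, renaming or updating the local files leaves the hash (and so the link) untouched, while changing the network/channel address changes it - exactly the trade-off noted in the post.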
