Currently the size of the value field is limited to 520 characters for name_update and ~1020 for name_firstupdate. There is an unintentional restriction (a bug) in name_update.
When we fix this bug we might as well further increase the size of the value field.
Please note this does not increase the risk of spamming/flooding the network or the blocks, because the fees will be structured in such a way that one large TX is more expensive than two small TXs.
Khal's suggestion was 9k, to let GPG keys fit. Maybe there are other ways, such as linking to GPG keys, or perhaps people only use Bitmessage and the like by now. Still, a larger value size might even save the network room, as fewer "import" statements will be needed (each of which causes an overhead of ~500 bytes for an additional TX, I'd guess - by that estimate it would today take about 17k to store 9k of value data).
Not having to depend on "import" for space so much would also simplify the implementation of Namecoin services.
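To make the ~17k guess above concrete, here is a small sketch of the arithmetic, assuming (as guessed, not measured) ~500 bytes of overhead per additional "import" TX and the current 520-character limit per transaction:

```python
import math

VALUE_LIMIT = 520        # current per-transaction value size (name_update)
IMPORT_OVERHEAD = 500    # rough guess: extra bytes per additional "import" TX

def total_bytes_with_imports(data_size, value_limit=VALUE_LIMIT,
                             overhead=IMPORT_OVERHEAD):
    """Rough on-chain footprint of storing data_size bytes split across imports."""
    chunks = math.ceil(data_size / value_limit)
    # every chunk beyond the first needs its own TX with ~500 bytes of overhead
    return data_size + (chunks - 1) * overhead

print(total_bytes_with_imports(9000))  # 17500 - roughly the "about 17k" above
```

With a 9000-character limit the same data would fit in a single value, so the ~8.5k of import overhead disappears entirely.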
We need to find consensus about the new size. I think Khal's suggestion still makes sense. If you think otherwise please explain why.
Previous opinions and concerns:
khal wrote:Fixing issue 3 will require an update of all Namecoin clients.
Due to this bug, a name_update will fail if the previous name_update or name_firstupdate used a value with more than 520 characters. Yes, you read that right...
So I think we should "benefit" from this forced upgrade by also increasing the maximum number of characters for the value field, since we already need a general upgrade anyway.
Some possible namecoin usages are already limited by this, like GPG. See https://dot-bit.org/Personal_Namespace#Notes.
Is it dangerous for the network?
Each transaction above 1K will always pay a fee, for each 1K of data. If fees are set to the right value, they will do their job.
Each transaction above 10K will be rejected (another network rule).
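A minimal sketch of the two rules just quoted, assuming a hypothetical per-kilobyte fee unit (the actual fee value is network policy, not specified here):

```python
import math

FEE_PER_KB = 0.005       # hypothetical fee unit; the real value is policy-dependent
MAX_TX_SIZE = 10_000     # transactions above 10K are rejected outright

def required_fee(tx_size_bytes, fee_per_kb=FEE_PER_KB):
    """Fee owed for a transaction under the rules above (a sketch)."""
    if tx_size_bytes > MAX_TX_SIZE:
        raise ValueError("transaction rejected: exceeds the 10K network limit")
    if tx_size_bytes <= 1000:
        return 0.0
    # one fee unit for each (partial) 1K of data once the transaction exceeds 1K
    return math.ceil(tx_size_bytes / 1000) * fee_per_kb
```

So a 9.5K transaction pays for ten 1K units, while anything past 10K never enters a block at all.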
How many characters?
I propose setting the maximum to 9000 characters, to allow 8k GPG keys plus some other data.
What do you think ?
virtual_master wrote:I agree with you. It would be the best solution.

domob wrote:I'm also not sure about 9k - what about fixing the bug so that we end up with the originally planned 1k? (And of course introducing graceful failure for larger values, instead of the current behaviour of silently getting stuck.) What are the applications you have in mind for 9k where 1k is too little?

moa wrote:For the record, I think enlarging the data field limit by 20 times is a bad idea.
1k would be enough at the moment.
As far as I know, nobody has complained even about the 500 byte limit.