A blocksize increase doesn't fix transaction malleability, quadratic hashing, or covert ASICBoost, and it doesn't enable use of the Lightning Network. Also, IIRC Greg Maxwell has said the actual segwit code is only around 2,000-3,000 lines; the rest is testing.
Segwit is not a 2x block size increase. That is marketing spin that Blockstream started spreading to try and fool people into supporting their agenda w/o having to compromise and raise the block size. Segwit does _not_ increase the block size in the sense that people are discussing and in some cases it might even make the transaction throughput much worse than the current system.
The whole reason people want bigger blocks is for higher transaction throughput on-chain. Segwit doesn't deliver that.
Segwit isn't a block size increase. It allows a small number of additional transactions, but it's very clear that's not enough. An actual block size increase (say, to 8MB) would solve the current problems.
LN is years from being actually usable by the masses. This is from their developers themselves.
> Maybe it's more proof that no one cares?
That's a pretty idiotic thing to say. The technology isn't ready yet despite the developers being aware of the problem for, as you say, years.
The block size increase with segwit is marginal and a side effect. It's a complicated feature that is meant for purposes other than block size increase. Don't obfuscate the issue. I'm talking about a simple tweaking of a parameter to increase block size.
> Segwit isn't a block size increase. It allows a small number of additional transactions, but it's very clear that's not enough. An actual block size increase (say, to 8MB) would solve the current problems.
But what about the new problems it would introduce? I run a full node and even now it eats a significant portion of my bandwidth. With even larger blocks I would probably drop off the network altogether. And I'm sure I'm not the only one out there in this situation. Therefore, I don't think Core developers are exaggerating in their concerns about the centralization pressure caused by overly large blocks.
Moreover, wouldn't increasing the block size simply kick the can down the road? One advantage of the current fee pressure is that it strongly encourages the development of 2nd layer solutions. There are right now at least three independent teams working on Lightning Network implementations and they seem to be making quick progress...
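The bandwidth worry above can be put in rough numbers. A minimal back-of-envelope sketch: the 10-minute block interval is the protocol's average, while the number of peers a node re-relays blocks to is purely an assumption here (real relay overhead also includes transaction gossip, which this ignores):

```python
# Back-of-envelope full-node bandwidth at different block sizes.
BLOCK_INTERVAL_S = 600   # average 10-minute blocks
RELAY_PEERS = 4          # hypothetical number of peers we upload each block to

def daily_traffic_gb(block_mb):
    """Rough GB/day: download each block once, re-upload it to RELAY_PEERS."""
    blocks_per_day = 86400 / BLOCK_INTERVAL_S   # 144 blocks/day
    download = block_mb * blocks_per_day        # MB downloaded
    upload = download * RELAY_PEERS             # MB re-relayed
    return (download + upload) / 1000           # MB -> GB, roughly

for size in (1, 8):
    print(f"{size} MB blocks: ~{daily_traffic_gb(size):.1f} GB/day")
```

Even under these charitable assumptions, 8MB blocks multiply a node's block traffic eightfold, which is the centralization pressure being described.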
But now that we are seeing these slightly bigger blocks, claims are being made that segwit’s method of increasing capacity is actually very inefficient.
In comparing some numbers the 1.6MB block had:
Block size: 1602023
Number of transactions: 833
Input count + Output count: 11073
Bytes per IO address: 144
While a random non-segwit block (483,182) had:
Block size: 999931
Number of transactions: 2110
Input count + Output count: 10574
Bytes per IO address: 95
The segwit block, therefore, which is near the practical limit of bitcoin's current blocksize rules, was able to handle only about 500 more inputs and outputs out of some 11,000: an increase of just ~5%, instead of the ~60% more that a non-segwit 1.6MB block would allow.
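The comparison can be reproduced directly from the stats quoted above; a quick sketch (the size-gain figure is just the raw size ratio of the two blocks):

```python
# Reproducing the bytes-per-I/O comparison from the two blocks quoted above.
segwit = {"size": 1602023, "io": 11073}   # the ~1.6 MB segwit block
legacy = {"size": 999931, "io": 10574}    # non-segwit block 483,182

for name, blk in (("segwit", segwit), ("legacy", legacy)):
    print(f"{name}: {blk['size'] / blk['io']:.1f} bytes per I/O")

extra_io = segwit["io"] - legacy["io"]                   # 499 extra I/Os
pct_gain = 100 * extra_io / legacy["io"]                 # ~4.7 %
size_gain = 100 * (segwit["size"] / legacy["size"] - 1)  # ~60 %
print(f"I/O gain: {pct_gain:.1f}% vs size gain: {size_gain:.0f}%")
```

The caveat is that this is a single block pair; the segwit block's transactions may simply have been unusually large multisig spends, so the per-I/O figure is suggestive rather than conclusive.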
In practice Segwit only increases the effective block size to about 1.4 MB for normal transactions, and theoretically to 4 MB, but only if blocks are filled with special kinds of transactions people don't really use.
It also doesn't really increase the blocksize; it moves some data outside of the blocksize calculation to make room for more transactions. The important difference is that Segwit is opt-in and depends on usage for its effect, while a blocksize increase would immediately raise transaction throughput to its full capacity.
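The 1.4 MB / 4 MB figures quoted above follow from segwit's weight rule (BIP 141: block weight = 3 × base size + total size, capped at 4,000,000). Solving for the largest block as a function of the witness share of its bytes gives a small sketch; the example witness fractions are assumptions, not measurements:

```python
# Segwit weight rule: 3*base_size + total_size <= MAX_WEIGHT (BIP 141).
# With witness_fraction w of the block's bytes being witness data,
# base = total * (1 - w), so total <= MAX_WEIGHT / (4 - 3*w).
MAX_WEIGHT = 4_000_000

def max_block_bytes(witness_fraction):
    """Largest serialized block that fits, for a given witness byte share."""
    return MAX_WEIGHT / (4 - 3 * witness_fraction)

for w in (0.0, 0.4, 1.0):
    print(f"witness share {w:.0%}: max block ~{max_block_bytes(w) / 1e6:.2f} MB")
```

A witness share of zero (no segwit usage at all) reproduces the old 1MB limit; a share of roughly 40%, plausible for ordinary payment traffic, gives about 1.4 MB; only a block that is almost entirely witness data approaches 4 MB. This is exactly the opt-in, usage-dependent behavior described above.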
> It isn't even guaranteed to increase transaction throughput which is the larger reason for wanting a block size increase.
It is guaranteed to increase transaction throughput as soon as miners activate it and wallets start producing segwit transactions.
There are currently over 20 wallets that have pledged to add segwit support, including all the most popular software ones and all 6 hardware wallet manufacturers (Trezor, Ledger, KeepKey, OpenDime, BitLox and Digitalbitbox). About half of them already have their SegWit implementation ready (before segwit even activated!).
>It's not clear to me why an increase to 2 or 4MB wasn't done though. Perhaps to force the issue?
Segwit, which activates on Bitcoin soon, already increases the block size to ~2MB. Because Bitcoin Cash doesn't implement Segwit, increasing to merely the same size as Bitcoin wouldn't have been compelling.
It's a commonly voiced but false dichotomy: increasing the blocksize 4x or even 8x would have negligible impact on the security or decentralization of Bitcoin.
It will just allow more transactions through, which will allow fees to fall to levels where normal people can use it for normal transactions.
The use of bitcoin as a currency is being throttled by the artificially small blocksize.
We can have more than 3 transactions per second, while still maintaining the fundamental properties that are Bitcoin.
Hopefully SegWit2X will be the sane compromise everyone hopes for, i.e. that we see the bottleneck alleviated, with segwit plus a 2MB base blocksize contributing to a near 4x improvement in transaction throughput and lower usage fees.
Also, Segwit will only deliver its block size increase as people move their existing bitcoin to new Segwit addresses. This is expected to be a slow process, so any block size gains will probably be too gradual to be helpful.
The Lightning Network is not bitcoin. The people who are fighting against block size increase are the same people pushing segwit. While there is nothing inherently wrong with segwit and the Lightning Network, they are ignoring the most important improvement to Bitcoin itself, which would be a pure block size increase.
Still limited to 1,000,000 bytes per 10 minutes, plus some extra for segwit, which was an unnecessary hack job that actually makes blocks bigger without much added throughput.
Can't tell if this is sarcasm, to be honest. SegWit added a small amount of breathing room without changing the base block size. Not increasing block sizes led to the split into BCH, then further into BCHABC and BCHSV. Core decided to switch to Lightning as the scaling strategy, but channel opens and closes are still bound by the on-chain cap of ~7 tx/s, which would require some 68 years just for everyone on earth to open and close a single channel, net new births of course.
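The "68 years" claim checks out under simple assumptions; a sketch, where the world population figure and the ~7 tx/s throughput are the assumed inputs:

```python
# Everyone on earth opens and closes one Lightning channel, each being
# one on-chain transaction, at an assumed on-chain rate of ~7 tx/s.
POPULATION = 7.5e9                  # assumed world population
TPS = 7                             # assumed on-chain transactions per second
SECONDS_PER_YEAR = 365 * 24 * 3600

txs_needed = POPULATION * 2         # one channel open + one close per person
years = txs_needed / TPS / SECONDS_PER_YEAR
print(f"~{years:.0f} years")
```

Note this also assumes the chain carries nothing but channel opens and closes for the whole period, so it is a lower bound on the time, not an estimate.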