What is Relictum project?

Relictum Pro is an endless distributed registry with a developed system of smart contracts describing (formalizing) more than 80% of significant events in a person's daily life, ranging from buying and selling goods and services and recording logistics events to tracking copyright and interacting with legal entities, and including self-executing transactions (smart contracts) in any field of activity.

Currently, modules and smart contracts have been developed, the platform is undergoing full-scale testing, and we have achieved the following results:

  • 100,000 transactions per second reached so far in real time;
  • 1,000,000 transactions per second is the estimated network performance;
  • Testnet results measured at the point where transactions not only reach the network but blocks are also written back to each node;
  • Our own modification of a SHA1-based hashing algorithm;
  • No consensus problems (no issues arising from ambiguities such as collisions, double spending, etc.);
  • A block hash collision may occur once in 100 years, due to the continuous numbering of each block in Master_Chain;
  • Size (weight) of a node ranging from 120 to 300 bytes;
  • According to calculations, in 20 years the registry weight can reach ~1 GB when working at bitcoin-level intensity;
  • Full-featured real nodes on smartphones, in favor of full decentralization: a fully distributed registry independent of third-party servers and services.
submitted by IvanchinWas to CryptoMarkets [link] [comments]

A Deeper Dive Into Simplified Payment Verification's Fascinating History

I posted this as a comment in a thread yesterday, and received more than one message suggesting that I make it a stand-alone post. So here goes....
The fundamental idea behind simplified payment verification (SPV) was first explained in Section 8 of the bitcoin white paper: if Alice pays Bob with a bitcoin transaction, Bob requires very little additional information from the network to verify that the transaction was included in the blockchain. Bob needs only two things: (1) he needs to know the longest chain of block headers (which requires downloading 80 bytes every ten minutes), and (2) he needs to know the path through the Merkle tree that links the transaction he received to the root hash embedded in the block header (about 320 bytes of information for a block with 1000 transactions). See this video (starting at t = 18:18) for more information.
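To make the Merkle-branch step concrete, here is a minimal sketch of the check Bob's wallet performs, assuming bitcoin's double-SHA256 hashing and a branch supplied as (sibling hash, sibling-is-right) pairs; the function and parameter names are illustrative, not any wallet's actual API:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes block and transaction data with two rounds of SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(txid: bytes, branch, merkle_root: bytes) -> bool:
    """Walk from the transaction up to the root embedded in the header.

    `branch` is a list of (sibling_hash, sibling_is_right) pairs, one per
    tree level: about log2(n) hashes, i.e. 10 levels for a 1000-tx block.
    """
    h = txid
    for sibling, sibling_is_right in branch:
        if sibling_is_right:
            h = double_sha256(h + sibling)
        else:
            h = double_sha256(sibling + h)
    return h == merkle_root
```

Ten levels of 32-byte hashes is where the ~320-byte figure for a 1000-transaction block comes from.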
Satoshi's SPV idea is simple and brilliant. But the details get messy when implemented in the real world. The first messy detail is how Bob actually knows that Alice sent him a bitcoin transaction in the first place. Alice could directly deliver the transaction to Bob, for example via NFC or by sending it to Bob's IP address. But today this isn't what usually happens. What happens instead is that Alice broadcasts the transaction to a few random nodes on the bitcoin network, these nodes in turn "gossip" the transaction to other nodes, et cetera, until all nodes on the network are aware of the new transaction. Alice trusts that Bob will eventually hear about the transaction through this gossip process.
This brings up the first obstacle to SPV. The only way Bob can be sure to learn of the payment is to listen to every transaction broadcast on the bitcoin network. This obviously requires a lot of data (full-node level bandwidth!), which defeats the purpose of SPV in the first place.
One way to solve this bandwidth obstacle is for Bob's wallet to register his address with a full node and ask it to forward him any transactions that pay him. Later, when his transaction is confirmed in a block, the same node can also forward Bob the Merkle branch proof he requires to verify for himself that the payment he received was confirmed in the blockchain.
Easy right? Bob can trustlessly verify that he was indeed paid with only a few SMS-text-messages worth of data. So what's the problem?
The problem is that Bob is leaking privacy information. The node that provides Bob information about his transaction knows that he (or rather the entity at his IP address) cares about these transactions. Information about which transactions Bob is interested in is valuable to certain companies/agencies and is potentially harmful to Bob if leaked.
The Bitcoin developers (e.g., Mike Hearn) came up with a clever solution to improve privacy: BIP37 Bloom filters. The idea behind BIP37 is that rather than registering Bob's addresses with a full node, Bob registers a Bloom filter with the full node instead. The Bloom filter is crafted by Bob's wallet so that all of the transactions Bob cares about get picked up by the filter, but some transactions that Bob doesn't care about also get picked up by the filter, thereby confusing the node as to which transactions are really Bob's. BIP37 allows Bob to "tune" the filter to be very private (i.e., to send Bob his transactions and LOTS of other random transactions) or highly selective (i.e., to send Bob his transactions and just a few other random transactions). We see here that there appears to be a bandwidth-versus-privacy trade-off with SPV.
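A toy filter makes the tuning knob visible. This is only a sketch of the idea (real BIP37 filters use murmur3 hashing with per-filter seeds and a specific wire format, none of which is reproduced here):

```python
import hashlib

class ToyBloomFilter:
    """Toy Bloom filter: a smaller bit array means more false positives,
    which here means more decoy transactions and thus more privacy."""

    def __init__(self, size_bits: int, num_hashes: int):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = [False] * size_bits

    def _positions(self, item: bytes):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos] = True

    def matches(self, item: bytes) -> bool:
        # True for every item Bob added, and also for a tunable fraction
        # of unrelated items: those false positives are the decoys.
        return all(self.bits[pos] for pos in self._positions(item))
```

Bob's wallet would add his output scripts to the filter and hand it to the node, and the node forwards every transaction the filter matches. Shrinking size_bits raises the false-positive rate, trading bandwidth for privacy exactly as described above.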
I still think BIP37 is great, but history has shown that it doesn't provide as much privacy as originally intended. The privacy problem with BIP37 is subtle and is due to the fact that "addresses" are so prominent in the user experience today. Every time Bob uses bitcoin to get paid, he typically specifies a new address to the payer. Hopefully, this address is only paid once, but maybe Alice decides to pay Bob a second time using the same address. And so Bob wants to constantly monitor every address his wallet has ever created for new incoming transactions. This means the Bloom filters he registers with full nodes are constantly growing and changing. Due to the way BIP37 is used in practice, it is possible for a node to determine specifically which addresses are Bob's from a series of these Bloom filters. We can fix this problem somewhat, but until we fully abstract "addresses" away from the user experience and make them truly "single use," I think this will always be a bit of an issue. Tom Zander (u/ThomasZander) probably has more to say on this topic.
BRD is an example of an SPV wallet that uses BIP37.
BIP157/158 took a new approach to SPV as part of the LN efforts, one which our own Chris Pacia (u/Chris_Pacia) has contributed to and built upon. BIP157/158 turns BIP37 on its head: rather than the SPV wallet registering a filter with a node, the node provides a filter to the SPV wallet of all the transactions it is aware of, e.g., in a given block. If the SPV wallet sees that the filter contains transactions that Bob cares about, then the SPV wallet can download the complete block from a different node. The wallet then builds the Merkle proof itself (from the downloaded block) to verify that the transaction was indeed included in the blockchain. With this technique, there is no privacy information leaked at all. But we see the bandwidth-versus-privacy trade-off once again: we've improved Bob's privacy, but now his wallet is downloading complete blocks every once in a while. This obviously isn't efficient if we imagine a future with 10 GB blocks!
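A sketch of that client-side flow, with the filter abstracted as a plain set; real BIP158 filters are compact Golomb-coded sets, and get_filter, get_block, and the block/transaction field names here are assumptions for illustration:

```python
def scan_chain_neutrino_style(headers, get_filter, get_block, my_scripts):
    """For each block: fetch the compact filter from one peer; only on a
    match, fetch the whole block (ideally from a different peer, so no
    single node learns which blocks we actually cared about)."""
    relevant_txs = []
    for header in headers:
        block_filter = get_filter(header)      # cheap: one small filter per block
        if any(script in block_filter for script in my_scripts):
            block = get_block(header)          # expensive: the full block
            for tx in block.transactions:      # wallet builds its own Merkle proof
                if any(out.script in my_scripts for out in tx.outputs):
                    relevant_txs.append(tx)
    return relevant_txs
```

Nothing about Bob's addresses ever leaves his machine; the cost is the occasional full-block download noted above.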
Neutrino is an example of a wallet that uses BIP157/158.
Lastly, I'll say something about Electrum servers, although I really haven't studied them enough to speak as an authority on this topic.
Firstly, I don't think it is correct to say "'true' SPV doesn't need a server but Electrum wallets do." All SPV wallets need a server, it's just that with a wallet like BRD a run-of-the-mill Satoshi client can act as the "server." But, remember, this is only the case because BIP37 was added to the Satoshi client! We could imagine a future where BU adds Electrum-server functionality but ABC doesn't. Now is u/jonald_fyookball's Electron Cash a "real" SPV wallet or not? The answer doesn't really matter because it's a bad question to ask in the first place. In the future, we're going to see the services offered by full nodes diverge, with perhaps some providing BIP37, some providing full Electrum features, and some doing totally new things. So this idea that Electron Cash relies on a "server" while BRD doesn't is a bad way to look at things in my opinion (they both need a server). What is important instead is the trade-offs made by the particular SPV-wallet solution (e.g., in terms of bandwidth-vs-privacy, and other trade-offs).
A second comment I'll add is that adding the features of an Electrum server to a mainstream Satoshi client would probably be controversial. Understand that there is a not-insignificant faction of people who'd love to revert even BIP37! I'd bet that Core would never in a million years add Electrum functionality, I'd be surprised if ABC would implement it, while I'd be surprised if BU wouldn't implement it, at least as an option. AFAIK, Electrum is a much greater privacy leak because SPV wallets directly ask for the Merkle branch proof they are interested in, and so it is much easier for an Electrum server to figure out which addresses belong to which users.
I hope this post was informative to some readers.
Relevant comment from Tom Zander: https://www.reddit.com/btc/comments/aubq4x/bitcoin_cash_spv_wallet_options/ehb7ghj/
Link to preview of Chris Pacia's Neutrino-based wallet: https://twitter.com/ChrisPacia/status/1100251375366217728?s=19
submitted by Peter__R to btc [link] [comments]

You Can Now Prove a Whole Blockchain With One Math Problem — Really

Article by Coindesk: William Foxley
The Electric Coin Company (ECC) says it discovered a new way to scale blockchains with “recursive proof composition,” a proof to verify the entirety of a blockchain in one function. For the ECC and zcash, the new project, Halo, may hold the key to privacy at scale.
Zcash is a privacy coin based on zero-knowledge proofs referred to as zk-SNARKs, and its current underlying protocol relies on “trusted setups.” These mathematical parameters have been used twice in zcash’s short history: at its launch in 2016 and for its first large protocol change, Sapling, in 2018.
Zcash masks transactions through zk-SNARKs, but the creation of the initial parameters remains an issue. If a transaction’s mathematical foundation (the trusted setup) is not destroyed, its holder can produce forged zcash.
Moreover, the elaborate ‘ceremonies‘ the zcash community undergoes to create the trusted setups are expensive and a weak point for the entire system. The reliance on trusted setups with zk-SNARKs was well known even before zcash’s debut in 2016. While other research failed to close the gap, recursive proofs make trusted setups a thing of the past, the ECC claims.

Bowe’s Halo

Speaking with CoinDesk, ECC engineer and Halo inventor Sean Bowe said recursive proof composition is the result of years of labor — by him and others — and months of personal frustration. In fact, he almost gave up three separate times.
Bowe began working for the ECC after his interest in zk-SNARKs was noticed by ECC CEO and zcash co-founder Zooko Wilcox in 2015. After helping launch zcash and its first significant protocol change with Sapling, Bowe moved to full-time research with the company.
Before Halo, Bowe worked on a different zk-SNARK variant, Sonic, requiring only one trusted setup.
For most cypherpunks, that’s one too many.
“People were also starting to think as far back as 2008, we should be able to have proofs that can verify other proofs, what we call recursive proof composition. This happened in 2014,” Bowe told CoinDesk.

Proofs, proofs and more proofs

In essence, Bowe and Co. discovered a new method of proving the validity of transactions, while masked, by compressing computational data to the bare minimum. As the ECC paper puts it, “proofs that are capable of verifying other instances of themselves.”
Blockchain transactions, such as bitcoin’s and zcash’s, are based on elliptic curves, with points on the curve serving as the basis for the public and private keys. The public address can be thought of in terms of the curve: we know what the elliptic curve looks like in general. What we do not know is where on the curve the private addresses reside.
It is the function of zk-SNARKs to communicate about private addresses and transactions (whether an address exists, and where it exists on the curve) anonymously.
The secp256k1 elliptic curve, used for bitcoin and ethereum via Hackernoon
Bowe’s work is similar to bulletproofs, another zk-SNARK that requires no trusted setup. “What you should think of when you think of Halo is like recursive bulletproofs,” Bowe said.
From a technical standpoint, bulletproofs rely on the “inner product argument,” which relays certain information about the curves to one another. Unfortunately, the argument is both very expensive and time consuming compared to your typical zk-SNARK verification.
By proving multiple zk-SNARKs with one (a task thought impossible until Bowe’s research), computational energy is pruned to a fraction of the cost.
“People have been thinking of bulletproofs on top of bulletproofs. The problem is the bulletproof verifier is extremely expensive because of the inner product argument,” Bowe said. “I don’t use bulletproofs exactly, I use a previous idea bulletproofs are built on.”
In fact, Bowe said recursive proofs mean you can prove the entirety of the bitcoin blockchain in less space than a bitcoin block header takes: 80 bytes of data.

The future of zcash

Writing on Twitter, Wilcox said his company is currently studying the Halo implementation as a Layer 1 solution on zcash.
Layer 1 solutions are implemented in the codebase constituting the blockchain itself. Most scaling solutions, like bitcoin’s Lightning Network, are Layer 2 solutions built on top of a blockchain’s state. The ECC’s interest in turning Halo into a Layer 1 solution speaks to the originality of the discovery, as it will reside next to code copied from bitcoin’s creator himself, Satoshi Nakamoto.
ECC is exploring the use of Halo for Zcash to both eliminate trusted setup and to scale Zcash at Layer 1 using nested proof composition.
— zooko (@zooko) September 10, 2019
Since the early days of privacy coins, scaling has been a contentious issue: with so much data needed to mask transactions, how do you grow a global network?
Bowe and the ECC claim recursive proofs solve this dilemma: with only one proof needed to verify an entire blockchain, data concerns could be a thing of the past:
“Privacy and scalability are two different concepts, but they come together nicely here. About 5 years ago, academics were working on recursive snarks, a proof that could verify itself or another proof [and even] verify multiple proofs. So, what [recursive proof composition] means is you only need one proof to verify an entire blockchain.”
To be sure, this isn’t sophomore-level algebra: Bowe told CoinDesk the proof alone took close to nine months of glueing various pieces together.

A new way to node

A further implication of recursive proofs is the amount of data stored on the blockchain. Since the entire ledger can be verified in one function, onboarding new nodes will be easier than ever, Bowe said.
“You’re going to see blockchains that have much higher capacity because you don’t have to communicate the entire history in one. The state chain still needs to be seen. But if you want to enter the network you don’t need to download the entire blockchain.”
While state chains still need to be monitored for basic transaction verification, syncing the entire history of a blockchain (over 400 GB and 200 GB for ethereum and bitcoin, respectively) becomes redundant.
For zcash, Halo means easier hard forks. Without trusted setups, ECC research claims, “proofs of state changes need only reference the latest proof, allowing old history to be discarded forever.”
When asked where his discovery ranks with other advancements, Bowe spoke on its practicality:
“Where does this stand in the grand scheme of things in cryptocurrency? It’s a cryptographic tool to compress computation… and scale protocols.”
submitted by GTE_IO to u/GTE_IO [link] [comments]

Can we talk about sharding and decentralized scaling for Raiblocks?

Introduction
This essay contains a healthy dose of math sprinkled with opinion, and I would be the first to admit that my math and personal opinions are sometimes wrong. The beauty of these forums is that it allows us to discuss topics in depth, and with enough group scrutiny we should arrive at the truth. I'm actually a cryptocurrency noob; I've only been looking at it in earnest for a few months, but I've seen enough to conclude that we are in the middle of a revolution, and if I don't intellectually participate somehow, I think I'll regret it for the rest of my life.
Here I analyze sharding in a PoS (proof-of-stake) system, and I will show that not only is sharding good, but I will quantify just how beneficial it is to Tps (transactions per second of the whole network) and mps (messages per second processed by each individual node). I use Raiblocks as my point of departure, regarding it as both my inspiration and my object of critique. But much of the discussion should be relevant to any PoS sharded system.
As you may know, Raiblocks does not employ ledger sharding, but seeing as every wallet is already in its own separate blockchain, it's basically already half-way there! From an engineering perspective, sharding is low-hanging fruit for a block-lattice structure like Raiblock's, especially when you compare it to how complicated it is for single-blockchain currencies.
For the record, I think that Raiblocks will scale just fine according to the current strategy laid out by Colin LeMahieu (u/meor) . By using only full nodes and hosting them in enterprise grade servers (basically datacenters), chances are good that the network will be able to keep up with future Tps (transaction per second) growth. Skeptics have been questioning if people are going to be willing to run nodes pro bono, just to support the network. But I don't doubt that many vendors will jump at the chance. If I'm Amazon, and I've been paying 3% of everything to Visa all these years, when there's an option to basically run my own Visa, I take it.
Payment networks like Paypal have been offering free person-to-person payments for years, eating the costs of processing those transactions in exchange for the opportunity to take their cut when those same people pay online vendors like Amazon. This makes business sense because only a minority of transactions are person-to-person anyway. Most payments result from people buying stuff. So, in a sense, vendors like Amazon have already been subsidizing our free transactions for years. By running Raiblocks nodes, they would still be subsidizing our transactions, but it would be a better deal than what they were getting before.
But have we forgotten something here? Is this really the dream of the instant, universal, decentralized, uncensorable payment network that was promised and only kinda delivered by Bitcoin? Decentralization comes in a spectrum, and while this is certainly better than a private blockchain like Ripple, the future of Raiblocks that we're looking at is a smallish number of supernodes run by a consortium of corporations, governments, and maybe a sprinkling of die-hard fans.
You may ask, but what about the nodes run by you and me on our dinky home computers and cable modem connections? Well, people need to remember that Raiblocks nodes need to talk to each other every time there's a transaction, in order to exchange their votes. The more nodes there are, the more messages have to be received and sent per node per transaction. Having more nodes may improve the decentralization, redundancy, and robustness of the network, but speed it definitely does not. Sure, the SSD of a computer running a mock node will handle 7000 tps, but the real bottleneck is network IO, not disk IO, and how many Comcast internet plans are going to keep up with 7000 × N messages per second, where N is the total number of nodes? If you take the message size to be 260 bytes (credit to u/juanjux's packet-sniffing skills), and the number of nodes to be 1000, that's 1.8 GB/s. Also, if you consider that at least two messages will need to be exchanged with every node (one for the sending wallet, one for the receiving), the network requirement per node becomes 3.6 GB/s. This requirement applies to both the download and upload bandwidth, since in addition to receiving votes from other nodes, you have to announce your own vote to all of them as well. Maybe with multicasting upload requirements can be relaxed, but the overall story is the same: you almost want to convince small players not to run their own nodes, so N doesn't grow too large. Hence, the lack of dividends.
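Restating that back-of-the-envelope arithmetic in a few lines (message size from the packet capture above; the node count and throughput are the same assumptions):

```python
MESSAGE_SIZE = 260   # bytes, per u/juanjux's packet capture
TPS = 7_000          # the SSD-limited throughput figure above
NODES = 1_000        # assumed network size

one_message_each = MESSAGE_SIZE * TPS * NODES   # 1.82e9 B/s, ~1.8 GB/s
both_wallets = 2 * one_message_each             # ~3.6 GB/s
print(f"{one_message_each / 1e9:.1f} GB/s -> {both_wallets / 1e9:.1f} GB/s per node")
```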
So, if we're resigned to running Raiblocks from corporate supernodes in the future, we might want to ask ourselves, why is decentralization so important anyway? For 99.9% of the cases, I actually think it won't matter. People just want their transactions to complete in a low-cost and timely fashion. And that's why I think Ripple and Raiblocks on their current trajectories have bright futures. They are the petty cash of the future. But for bulk wealth storage, you want decentralization because it makes it hard for any one entity to gain control over your money. No government will be able to step in and freeze your funds if you're Wikileaks or a political dissident when your cryptocurrency network is hosted on millions of computers scattered across the internet. I know the millions number sounds outlandish given that Bitcoin itself has fewer than 12k nodes at present, but that's my vision for the future. And I hope that by the end of this essay, you'll agree it's plausible.
The main benefit of sharding is that it allows nodes to divide the task of hosting the ledger into smaller chunks, reducing the per-node bandwidth requirements to achieve a certain Tps. I'll show that this benefit comes without having to sacrifice ledger redundancy, so long as sufficient nodes can be recruited. One disadvantage that must be noted is the increased overhead of coordinating a large number of nodes subscribed to partial ledgers. At the very least, nodes will need to know how wealthy other nodes are for voting purposes. However, I don't see how an up-to-the-second update of nodal wealth is necessary, since wealth changes on the timescale of months, if not years. It should be sufficient to conduct a role call once every few weeks to update nodes on who the other nodes are and to impart information about wealth and ledger subscriptions. Nonetheless, in principle this overhead means it is still possible to have too many nodes even with sharding.
Raiblocks has a unique advantage over single-chain cryptocoins in that each wallet address is already its own blockchain. This makes it especially amenable to sharding, since each wallet can already be thought of as its own shard! You just need a clever algorithm to decide which nodes subscribe to which wallets. For the purposes of this analysis, I assume a random subscription, so that for example if both you and I subscribe to 10% of the ledger, our subscriptions are probabilistically independent, and we intersect on roughly one percent of the total wallet space. I will also assume that all nodes are identical to each other in bandwidth, though in practice I think each node's owner should decide how much bandwidth he is willing to commit, letting the node's software dynamically adjust its P to maintain the desired bandwidth, where P, or the participation level, is the fraction of the ledger that the node is subscribed to. That way, when the Tps of the network increases over time, each node will use the increasing bandwidth demand as a feedback signal to automatically lower its ledger subscription percentage. Then, all that would be missing for smooth and seamless network growth is a mechanism for ensuring node count growth.
 
Some math
Symbol Definition
mps messages per second received/sent per individual node
N total number of nodes
Tps transactions per second processed by the whole network
R ledger redundancy
P fractional participation level of an individual node
k role call frequency
From the definitions, it should be apparent that
(1) R = NP
There are two types of messages that nodes have to deal with, transaction messages and role-call messages. Transaction messages are those related to updating the ledger when money is sent from one wallet to another. For each transaction, each node presiding over the sending wallet/shard will need to
  1. Broadcast its vote to the other R members of the shard. In the normal case this is a thumbs up signal and no conflict resolution is required.
  2. Receive votes from the other R members of the shard
  3. Broadcast its thumbs up to the R members of the receiving wallet/shard
Each node presiding over the receiving wallet/shard will need to
  1. receive thumbs up signals from the R members of the sending wallet/shard
Therefore, on a macro level upload and download requirements are the same. (Two messages sent, two messages received.)
Role-call messages are those related to disseminating an active directory of which nodes are participating in which wallets, plus information about nodal wealth. Knowledge about each individual node is broadcast to the network at a rate of k. I think 10^-6 Hz is reasonable, for an update interval of about 12 days. For each update, all R nodes presiding over the wallet of the node whose information is being shared will broadcast their view of the node's wealth to all N nodes. Therefore, from the perspective of an individual node:
  1. The rate that role-call messages are received is kRN.
  2. The rate that role-call messages are sent is k(# node wallets presided over)N = k(NP)N = kRN.
Again, upload and download rates are the same. Since upload and download rates are symmetric (which intuitively should be true since every message that is sent needs to be received), the parameter mps can be used equally to describe upload and download bandwidth.
(2) mps = 2R(PTps) + kRN,
where the two terms correspond to the transaction and role-call messages, respectively. Using (1), (2) can be rewritten as
(3) mps = 2R^2·Tps/N + kRN
Here, we see an interesting relationship between the different message categories and the node count. For a fixed ledger redundancy R and Tps, the number of transaction messages is inversely proportional to the number of nodes. This is intuitive. If all of a sudden there are twice as many nodes and ledger redundancy remains the same, then each node has halved its ledger subscription and only has to deal with half as many transactions. This is the "many hands make light work" phenomenon in action. On the other hand, the number of role-call messages increases in proportion to the number of nodes. The interplay between these two factors determines the sweet spot where mps is at a local minimum. Since the calculus is straightforward, I'll leave it as an exercise to the reader to show that
(4) N_sweetspot = (2R·Tps/k)^(1/2)
Alternatively, another way of looking at things is to consider mps to be fixed. This may be more appropriate if each node is pegged at its committed bandwidth. Then (3) describes the relationship between the ledger redundancy and N. You may ask how this can be reconciled with (1), which seems to imply that N and R are directly proportional, but in this scenario each node is dynamically adjusting its ledger subscription P in response to a changing N to maintain a constant bandwidth mps. In this view, the sweet spot for N is where R is maximized. Interestingly, regardless of which view you take, you arrive at the same expression for the sweet spot (4).
If N < N_sweetspot, then transaction messages dominate the total message count. The system is in the transaction-heavy regime and needs more nodes to help carry the transaction load. If N > N_sweetspot (the node-heavy regime), transaction messages are low, but the number of role-call messages is large and it becomes expensive to keep the whole network in sync. When N = N_sweetspot, the two message categories occur at the same rate, which is easily verified by plugging (4) back into (3). This is when the network is at its most decentralized: message count per node is low while redundancy is high.
Note that N_sweetspot increases as Tps^(1/2). This implies that, as transaction rate increases, the network will not optimally scale without somehow attracting new people to run nodes. But the incentives can't be too good either, or N may increase beyond N_sweetspot. Ideally, a feedback mechanism using market forces will encourage the network to gravitate towards the sweet spot (more on this later).
One special case is where P=1 and N=R. This is when the network is at its most centralized operating point, with every single node acting as a full node. This minimizes node count for a given redundancy level R and is how Raiblocks is currently designed. I will show that for most real-world numbers, the role-call term is so small as to be negligible, but the mps is many orders of magnitude higher than in the decentralized case because of the large transaction term.
Assuming that we are able to keep the network operating at its sweet spot, by plugging (4) into (3), we arrive at
(5) mps_sweetspot = R^(3/2)·(8k·Tps)^(1/2)
If instead we plug N=R into (3), we arrive at
(6) mps_centralized = 2R·Tps + kR^2
So, we see that in the decentralized case the mps of individual nodes increases as the square root of Tps, a much more sustainable form of scaling than the linear relationship in the centralized case.
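These formulas are easy to sanity-check numerically. The sketch below, using the parameters from the tables that follow, reproduces the decentralized rows from equations (4) and (5) (the centralized helper implements equation (6)):

```python
from math import sqrt

R = 1_000          # ledger redundancy (table value below)
K = 1e-6           # role-call frequency k, in Hz
MSG_BYTES = 260    # observed message size

def decentralized(tps):
    n = sqrt(2 * R * tps / K)            # eq. (4): sweet-spot node count
    mps = R ** 1.5 * sqrt(8 * K * tps)   # eq. (5)
    return n, mps

def centralized(tps):
    mps = 2 * R * tps + K * R ** 2       # eq. (6): every node is a full node
    return R, mps

for tps in (0.1, 1, 10, 100, 1_000, 10_000, 100_000):
    n, mps = decentralized(tps)
    print(f"Tps={tps:>9}: N={n:.2e}  mps={mps:,.0f}  "
          f"traffic={mps * MSG_BYTES:.2e} B/s per node")
```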
And now, the moment we've all been waiting for: plugging various network load scenarios into these formulas and comparing the most decentralized case to the most centralized. Real world operation will be somewhere in between these two extremes.
Fixed parameters

| Parameter | Value |
|---|---|
| packet size (bytes) | 260 |
| k (Hz) | 1.00E-06 |
| R | 1,000 |
| transaction fee ($) | $0.01 |

Network load scenarios

| | Tps = 0.1 | 1 | 10 | 100 | 1,000 | 10,000 | 100,000 |
|---|---|---|---|---|---|---|---|
| Total monthly dividends | $2,592 | $25,920 | $259,200 | $2,592,000 | $25,920,000 | $259,200,000 | $2,592,000,000 |
| **Decentralized node requirements** | | | | | | | |
| mps (Hz) | 28 | 89 | 283 | 894 | 2,828 | 8,944 | 28,284 |
| node traffic (bytes/s) | 7.35E+03 | 2.33E+04 | 7.35E+04 | 2.33E+05 | 7.35E+05 | 2.33E+06 | 7.35E+06 |
| N | 1.41E+04 | 4.47E+04 | 1.41E+05 | 4.47E+05 | 1.41E+06 | 4.47E+06 | 1.41E+07 |
| P | 7.07E-02 | 2.24E-02 | 7.07E-03 | 2.24E-03 | 7.07E-04 | 2.24E-04 | 7.07E-05 |
| total network traffic (bytes/s) | 1.04E+08 | 1.04E+09 | 1.04E+10 | 1.04E+11 | 1.04E+12 | 1.04E+13 | 1.04E+14 |
| yearly network traffic (bytes) | 3.28E+15 | 3.28E+16 | 3.28E+17 | 3.28E+18 | 3.28E+19 | 3.28E+20 | 3.28E+21 |
| **Decentralized node income** | | | | | | | |
| monthly per node ($) | $0.18 | $0.58 | $1.83 | $5.80 | $18.33 | $57.96 | $183.28 |
| income/GB ($/GB) | $0.0096 | $0.0096 | $0.0096 | $0.0096 | $0.0096 | $0.0096 | $0.0096 |
| **Centralized node requirements** | | | | | | | |
| mps (Hz) | 2.01E+02 | 2.00E+03 | 2.00E+04 | 2.00E+05 | 2.00E+06 | 2.00E+07 | 2.00E+08 |
| node traffic (bytes/s) | 5.23E+04 | 5.20E+05 | 5.20E+06 | 5.20E+07 | 5.20E+08 | 5.20E+09 | 5.20E+10 |
| N | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 | 1,000 |
| P | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| total network traffic (bytes/s) | 5.23E+07 | 5.20E+08 | 5.20E+09 | 5.20E+10 | 5.20E+11 | 5.20E+12 | 5.20E+13 |
| yearly network traffic (bytes) | 1.65E+15 | 1.64E+16 | 1.64E+17 | 1.64E+18 | 1.64E+19 | 1.64E+20 | 1.64E+21 |
| **Centralized node income** | | | | | | | |
| monthly per node ($) | $2.59 | $25.92 | $259.20 | $2,592 | $25,920 | $259,200 | $2,592,000 |
| income/GB ($/GB) | $0.0191 | $0.0192 | $0.0192 | $0.0192 | $0.0192 | $0.0192 | $0.0192 |
Yes, I did sneak a transaction fee in there, which is anathema to the Raiblocks way. But I wanted to incentivize people to run nodes. Observe that income per gigabyte remains the same, independent of network Tps, because both total income and total bandwidth scale proportionally to Tps. The decentralized case has half the income/GB because the role-call overhead doubles network activity. In either case, the income per GB depends on transaction fee and is independent of network load.
An interesting number to check online is the price/GB that various ISP's charge. With Google Fiber, it is possible to purchase bandwidth as low as $0.00076 per GB, meaning that it may be possible for nodes to be profitable even if fees were lowered by another order of magnitude. As time progresses, bandwidth costs will only go down, so fees may be able to be lowered even further past that. But because of electricity and other miscellaneous costs, I think a one cent transaction fee is probably pretty close to what people need to incentivize them to run nodes.
With sharding, even many home broadband connections today can feasibly support 100,000 transactions per second, with each node subscribed to about one ten thousandth of the total ledger and handling about 7 MB/s. Getting 14 million people to run nodes may seem like a tall order, but the financial incentives are there. Just look at all the people who have rushed to do GPU mining. Here, bandwidth replaces hashing power as the tool used for mining.
According to a study done by Cisco, yearly internet traffic is projected to reach 3.3 ZB by 2021. Looking at the table, that means if we ever reach 100,000 Tps, Sharded Raiblocks traffic would be equal to the rest of the world combined. Yikes! But if you think about it, nobody along the way is taking on an unbearable load. Users pay low fees for transactions. Nodes get dividends. ISPs get additional customers. The only ones who lose out are Visa, Paypal, and banks.
With such a large network presence, the cultural impact of this coin would be huge. That, in addition to the sheer number of participants running nodes as side businesses would cement this as the coin of the people.
From a macro level, I see no red flags that would indicate this is economically or technically infeasible. Of course, the devil's in the details so I'm posting this to see if people think I'm on the right track. To me, it seems that the possibilities are tantalizing and someone needs to build a test net to see if this idea flies (u/meor, if any of this sounds appealing, are you guys hiring? ;) ).
Musings
I've only scratched the surface and there are many other topics that are worthy of deeper discussion:
submitted by Cookiemole to RaiBlocks [link] [comments]

HPB (High-Performance Blockchain) Whitepaper breakdown

If you'd like to read the first article I published on reddit on HPB, please take a look here
https://redd.it/7qt54x
 
People often skim over white papers as they simply cannot be bothered to read through them. Let’s be honest, most of them are as dull as dishwater and even more so when full of technical blockchain related buzzwords that most people new to cryptocurrencies simply don’t understand.
 
Well as someone now invested in High-Performance Blockchain (HPB), I want to know and understand what the company is trying to achieve, so I’ve spent some time dissecting the white paper and actually gathering the information behind the buzzwords to determine if the company offers real key differentiators and unique selling points that allow the proposal to stand separately from the competition.
 
So here is my breakdown of some of the key sections from the soon-to-be-updated HPB whitepaper
 
TPS
 
Ok so TPS stands for “transactions per second” and is reasonably well recognised in the world of blockchain but often misunderstood or under-appreciated. Essentially HPB are stating in their white paper that TPS is a bottleneck for all current blockchain solutions and this bottleneck restricts development and simply will not meet future business needs.
 
So let’s just explore this for a minute. Anyone who knows Bitcoin and Ethereum and has tried to transfer their coins from a wallet to an exchange or vice-versa may at some point have experienced slow transfer or “transaction” times. This is usually when the network is congested, and transactions which usually take a few minutes are suddenly slowed down considerably. Let's say you are transferring some Eth to an online exchange to buy another coin, as you’ve noticed that this other coin’s price is dropping and you want to catch the low price to buy in before the bounce… so you set up the transfer, increase your Ether Gwei to 50 to get things moving quicker, and then you wait for your 12 block confirmations to be confirmed before the Eth appears in your exchange wallet. You wait 10-15 minutes and the Eth suddenly appears, only to find the price has already bounced on the coin you wanted to buy and it’s already up 10% on what it happened to be 15 minutes ago! That delay has just cost you $500!
 
Delay can be extremely frustrating, and can often be expensive. Now whilst individuals tend to tolerate slight delays on occasion, (for now!) It will simply be unacceptable moving forward. Imagine typing in your pin at a cashpoint/ATM and having to wait 4-5 minutes to get your money! No way!
 
So TPS is important….in fact it’s more than important, it’s fundamental to the success of blockchain technology that TPS speeds improve, and blockchain networks scale accordingly! So how fast are current TPS rates of the big crypto’s?
 
Here is the estimated TPS of the Top 10 cryptos. I should point out that this is the CURRENT TPS speed. Almost all of the cryptos mentioned have plans in the pipeline to scale up and improve TPS using various ingenious solutions, but as of TODAY this is the average speed.
 
  1. Bitcoin ~7 TPS
  2. Ethereum ~15 TPS
  3. Ripple ~1000 TPS
  4. Bitcoin Cash ~40 TPS
  5. Cardano ~10 TPS
  6. Litecoin ~56 TPS
  7. Stellar ~3700 TPS
  8. NEM ~4 TPS
  9. EOS ~0 TPS
  10. NEO ~1000 TPS
 
Like I say, almost all of these have plans to increase transaction speed and plans to address scalability, but these are the numbers I have researched as of this particular moment in time.
 
Let’s compare this to Visa, the global payment processor, which has an “average” daily peak of around 4,500 TPS and is capable of 56,000 TPS.
 
Some of you may say, “Well that doesn’t matter, as in a few months’ time [insert crypto I own here] will be releasing [insert scalability plan of my crypto here] which means it will be capable of running [insert claimed future TPS speed of my crypto here] so my crypto will be the best in the world!”
 
But this isn’t the whole story….. far from it. You see this doesn’t address a fundamental element of blockchain…..and that is the PHYSICAL transference of information from one node to another to allow for block validation and consensus. You know….the point where the data processed moves up and down the OSI stack and hits the physical layer on the network card and gets transported through the physical Ethernet cable or fibre that takes it off to somewhere else.
 
Also, you have to factor in the actual transaction size (measured in bytes or kilobytes) that is being transferred. VISA transactions vary in size from about 0.2 kilobytes to a little over 1 kilobyte. In order to maintain 4,500 TPS, and if we use an average of 0.5 kB (512 bytes) per transaction, then you need to be physically transporting approximately 2.3 MB of data per second. OK so this seems tiny! We all have 100 Mb broadband at home and the NIC network cards in your computers are capable of running 10 Gb… so 2.3 MB is nothing… for now!
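That throughput figure is a single multiplication (the 512-byte average being the assumption above):

```python
VISA_TPS = 4_500        # quoted average daily peak
AVG_TX_BYTES = 512      # assumed 0.5 kB average transaction

throughput = VISA_TPS * AVG_TX_BYTES   # bytes per second
print(f"{throughput:,} B/s (~{throughput / 1e6:.2f} MB/s)")
# 2,304,000 B/s (~2.30 MB/s)
```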
 
If we go back to actual blocks on the blockchain, let’s first look at bitcoin. It has a fixed 1 MB block size (1,000,000 bytes), so if bitcoin is running at around 7 TPS, then we need to be physically transporting 6.83 MB per second per block. Still pretty small and easy to cope with… well, if that’s the case, then why is bitcoin so slow?
 
Well if you consider the millions of transactions being requested every day, and that you can only fit 1mb of data into a single block, then if you imagine the first block in the queue gets processed first (max 1mb of data), but the rest of the transactions have to wait, to see if they hopefully are in the next block, or maybe the next one? Or maybe the next one? Or the next one?
 
Now the whole point of “decentralization” is that every node on the blockchain network is in agreement that the block is valid… this consensus typically takes around 10 minutes for the blockchain network to fully “sync” on the broadcasted block. Once the entire network is in agreement, they start to “sync” the next block. Unfortunately, if your transaction isn’t at the front of the queue, then you can see how it might take a while for your transaction to get processed. So is there a way of moving to the front of the queue, similar to the way you can get a “queue jump pass” at a theme park? Sure there is… you can pay a higher-than-average transaction fee to get prioritized… but if the transaction fees are relative to the cryptocurrency itself, then the greater the value of the crypto becomes (i.e. the more popular it becomes), the higher the transaction fee becomes in order to allow you to make your transactions.
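A toy mempool captures those queue-jump mechanics; the class below simply takes the highest fee rate first, a sketch of the fee market rather than any client's actual selection logic:

```python
import heapq

class ToyMempool:
    """Miners fill each 1 MB block highest-fee-rate-first:
    pay more per byte, jump the queue."""

    BLOCK_LIMIT = 1_000_000  # bytes

    def __init__(self):
        self._heap = []      # max-heap via negated fee rate

    def submit(self, txid: str, size_bytes: int, fee: float):
        heapq.heappush(self._heap, (-fee / size_bytes, txid, size_bytes))

    def next_block(self):
        block, used = [], 0
        while self._heap:
            neg_rate, txid, size = heapq.heappop(self._heap)
            if used + size > self.BLOCK_LIMIT:
                # doesn't fit: put it back and stop (a real miner would
                # keep scanning for smaller transactions that still fit)
                heapq.heappush(self._heap, (neg_rate, txid, size))
                break
            block.append(txid)
            used += size
        return block
```

Everything still in the heap after next_block() returns is the back of the queue, waiting for the next block, or maybe the one after that.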
 
Once again using the cashpoint ATM analogy, it’s like going to withdraw your money, and being presented with some options on screen similar to that of, “You can have your money in less around 10 minutes for $50, or you can wait 20 minutes for $20, or you can camp out on the street and wait until tomorrow and get your money for $5”
 
So it’s clear to see the issue…..as blockchain scales up and more people use it, the value of it rises, the cost to use it goes up, and the speed of actually using it gets slower. This is not progress, and will not be acceptable as more people and businesses use blockchain to transact with money, information, data, whatever.
 
So what can be done? …Well you could increase the block size……more data held in a block means that you have a greater chance of being in a block at the front of the queue……Well that kind of works, but then you still have to factor in how long it takes for all nodes on the blockchain network to “sync” and reach consensus.
 
The more data per block, the more data there is that needs to be fully distributed.
 
I used visa as an example earlier as this processes very small amounts of transactional data. Essentially this average 512 bytes will hold the following information: transaction amount, transaction number, transaction date and time, transaction type (deposits, withdrawal, purchase or refund), type of account being debited or credited, card number, identity of the card acceptor (organization/store address) as well as the identity of the terminal (company name from which the machine operates). That’s pretty much all there is to a transaction. I’m sure you will agree that it’s a very small amount of data.
 
Moving forward, as more people and businesses use block-chain technology, the information transacted across blockchain will grow.
 
Let’s say, (just for a very simplistic example) that a blockchain network is being used to store details on a property deed via an Ethereum Dapp, and there is the equivalent of 32 pages of information in the deed. Well one ascii character stored as data represents one byte.
 
This “A” right here is one byte.
 
So if an A4 page holds, let’s say, 4,000 ASCII characters, then that’s 4,000 bytes per page, or 4,000 × 32 = 128,000 bytes of data. Now if a 1 MB block can hold 1,000,000 bytes of data, then my single document alone has just consumed (128,000/1,000,000) × 100 = 12.8% of a 1 MB block!
 
Now going further, what if 50,000 people globally decide to transfer their mortgage deeds? Alongside those are another 50,000 people transferring their Will in another Dapp, alongside 50,000 other people transferring their sale-of-business documents in another Dapp, alongside 50,000 people transferring other “lengthy” documents in yet another Dapp? All of a sudden the network comes to a complete and utter standstill! That’s not to mention all the other “big data” being thrown around from company to company, city to city, and continent to continent!
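The deed arithmetic above, spelled out with the text's assumptions (4,000 ASCII bytes per page, 1 MB blocks):

```python
PAGE_BYTES = 4_000        # assumed ASCII characters (1 byte each) per A4 page
PAGES = 32
BLOCK_BYTES = 1_000_000   # bitcoin's 1 MB block size

deed = PAGE_BYTES * PAGES                 # 128,000 bytes
share = deed / BLOCK_BYTES * 100          # 12.8% of a single block
per_block = BLOCK_BYTES // deed           # only 7 such deeds fit per block
print(f"{deed:,} bytes = {share:.1f}% of a block; {per_block} deeds per block")
```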
 
Ok in some respects that's not really a fair example, as I mentioned the 1mb block limit with bitcoin, and we know that bitcoin was never designed to be anything more than a currency.
 
But that’s Bitcoin. Other blockchains are hoping/expecting people to embrace and adopt their network for all of their decentralized needs, and as time goes by and more data is sent around, then many (if not all) of the suggested scalability solutions will not be able to cope…..why?
 
Because sooner or later we won’t be talking about megabytes of data….we’ll be talking about GB of data….possibly TB of data on the blockchain network! Now at this stage, addressing this level of scalability will definitely not be purely a software issue….we have to go back to hardware!
 
So…finally coming to my point about TPS…… as time goes by, in order for block chains to truly succeed, the networking HARDWARE needs to be developed to physically move the data quickly enough to be able to cope with the processing of the transactions…..and quite frankly this is something that has not been addressed…..it’s just been swept under the carpet.
 
That is, until now. High-Performance Blockchain (HPB) want to address this issue…..they want to give blockchain the opportunity to scale up to meet customer demand, which may not be there right at this moment, but is likely to be there soon.
 
According to this website from just over a year ago, more data will be produced in 2017 than in the entire history of civilization spanning 5,000 years!
https://appdevelopermagazine.com/4773/2016/12/23/more-data-will-be-created-in-2017-than-the-previous-5,000-years-of-humanity-/
That puts things into perspective when it comes to data generation and expected data transference.
 
Ok so visa can handle 56,000 TINY transactions per second….Will that be enough for block chain TPS in 5 years’ time? Well I’ll simply leave that for you to decide.
 
So what are HPB doing about this? They have been developing a specialist hardware accelerated network card known as a TOE card (TOE stands for TCP/IP Offload Engine) which is capable of supporting MILLIONS of transactions per second. Now there are plenty of blockchains out there looking to address speed and scaling, and some of them are truly fascinating, and they will most likely address scalability in the short term….but at some point HARDWARE will still be the bottleneck and this will still need to be addressed like the bad smell in the room that won’t go away. As far as I know (and I am honestly happy to stand corrected here) HPB are the ONLY Company right now who see hardware acceleration as fundamental to blockchain scalability.
 
No doubt more companies will follow over time, but if you appreciate “first mover advantage” you will see how critical this is from a crypto investment perspective.
 
Here are some images of the HPB board:
HPB board
HPB board running
Wang Xiaoming holding the HPB board
 
GVM (General Virtual Machine mechanism)
The HPB General Virtual Machine is currently being developed to allow the HPB solution to work with other blockchains, to enhance them and help them scale. Currently the GVM is being developed for the NEOVM (NEO Virtual Machine) and the EVM (Ethereum Virtual Machine), with others planned for the future.
 
Now a lot of people feel that if Ethereum were not hampered with scalability issues, then it would be THE de-facto blockchain globally (possibly outside of Asia due to things like Chinese regulation) and that NEO is the “Ethereum of China” developed specifically to accommodate things like Chinese regulation. So if HPB is working on a hardware solution to help both Ethereum and NEO, then in my opinion this could add serious value to both blockchains.
 
Claim of Union Pay partnership
To quote directly (verbatim) from the whitepaper:
After listening to the design concept of HPB, China's largest financial data company UnionPay has joined as a partner with HPB, with the common goal of technological practice and exploration of financial big data and high-performance blockchain platform. UnionPay Wisdom currently handles 80% of China's banking transaction data, with an annual turnover of 80 trillion yuan. HPB will join hands with China UnionPay to serve all industry partners, including large banks, insurance, retail enterprises, fintech companies and so on.
 
Why is this significant? Have a read of this webpage to get an idea of the scale of this company:
http://usa.chinadaily.com.cn/business/2017-10/10/content_33060535.htm
 
Now some people will say, there’s no proof of this alliance, and trust me I am one of the biggest sceptics you will come across….I question everything!
 
Now at this stage I have no concrete evidence to support HPB’s claim, however let me offer you my train of thought. Whilst HPB hasn’t really been marketed in the West (a good thing in my opinion!), the leader of HPB, Wang Xiaoming, is literally attending every single major Asian blockchain event to personally present his solution to major audiences. HPB also has the backing of NEO, who angel-invested the project.
 
Take a look at this YouTube video of Da Hongfei talking about NEO, bringing up a slide at the recent “Blockchain Revolution Conference” on January 18th 2018 – if you don’t want to watch the entire video (it’s obviously all about NEO), then skip forward to exactly 9m13s and take a look at the slide he brings up. You will see it shows HPB. Do you honestly think Da Hongfei, the leader of NEO, would bring up details of a company that he felt to be untrustworthy to share with a global audience?
Blockchain Revolution 2018 video
 
Here are further pictures of numerous events at which HPB’s very own Wang Xiaoming has presented HPB… in the blockchain world he is very respected, having released multiple whitepapers and published several books on blockchain technology over the years. This is a “techie” with a very public profile… this is not some guy who knows nothing about blockchain looking to scam people with a dodgy website full of lies and exaggerations!
Wang Xiao Ming presentation at Lujiazui Blockchain event
Wang Xiao Ming presenting at the BTAS2017 summit
Wang Xiao Ming Blockchain presentation
 
I won’t go into some of the other “dubious” altcoins on the market who claim to be in bed with companies like IBM, Huawei, Apple, etc., but when you do some digging, they have a registered address at a drop-mail and you can only find 3-4 Baidu links about the company on the internet, so you have to question their trustworthiness.
 
So do I believe in HPB…..very much so :-)
 
Currently the HPB price sits at $6.00 on www.bibox.com and isn’t really moving. I believe this is due to a number of factors.
 
Firstly, the entire crypto market has gone bonkers this last week or so, although this apparently happens every January.
 
Secondly the coin is still on relatively obscure exchanges that most people have never heard of.
 
Thirdly, because of the current lack of exposure, the coin trades at low volume, which means (in my opinion... I can’t actually prove it) that crypto “bots” are effectively controlling the price range as it oscillates between roughly $6.00 and $9.00, over and over again.
 
Finally the testnet proof of concept hasn’t been launched yet. We’ve been told that it’s Q1 this year, so it’s imminent, and as soon as it launches I think the company will get a lot more press coverage.
 
UPDATE - It has now been officially confirmed that HPB will be listed on Kucoin
The tentative date is February 5th
 
So, for the investors out there….. It’s trading at $6.00 per coin, and with a circulating supply of 28 million coins, it gives the company an mcap of $168,000,000
 
So what could the price go to? I have no idea, as unfortunately I do not have a crystal ball in my possession… however, some are referring to HPB as the EOS of China (only HPB has an actual working, hardware-focused product as opposed to plans for the future), and EOS currently has an mcap of $8.30 billion… so for HPB to match that mcap, the price of HPB would have to increase almost 50-fold to $296.40. Now that’s obviously on the optimistic side, but even still, it shows its potential. :-)
 
I believe hardware acceleration alongside software optimization is the key to blockchain success moving forward. I guess it’s up to you to decide if you agree or disagree with me.
 
Whatever you do though…..remember that Most importantly of all…… DYOR!
 
My wallet address, if you found this useful and would like to donate is: 0xd7FAbB675D9401931CefE9E633Ef525BfBa7a139
submitted by jpowell79 to u/jpowell79 [link] [comments]

Mimblewimble in IoT—Implementing privacy and anonymity in INT Transactions

2017 and 2018 were years focused on the topic of scaling. Coins forked and projects were hyped with this one word as their sole mantra. What this debate brought us were solutions that satisfy the current need, paired with plans for the future. The focus of the years to come will be anonymity and fungibility in mass adoption.
In the quickly evolving world of connected data, privacy is becoming a topic of immediate importance. As it stands, we trust our privacy to centralized corporations where safety is ensured by the strength of your passwords and how much effort an attacker dedicates to breaking them. As we grow into the new age of the Internet, where all things are connected, trustless and cryptographic privacy must be at the base of all that it rests upon. In this future, what is at risk is not just photographs and credit card numbers, it is everything you interact with and the data it collects.
If the goal is to do this in a decentralized and trustless network, the challenge will be finding solutions that have a range of applicability that equal the diversity of the ecosystem with the ability to match the scales predicted. Understanding this, INT has begun research into implementing two different privacy protocols into their network that conquer two of the major necessities of IoT: scalable private transactions and private smart contracts.

Mimblewimble

One of the privacy protocols INT is looking into is Mimblewimble. Mimblewimble is a fairly new and novel implementation of the same elements of Elliptic-Curve Cryptography that serves as the basis of most cryptocurrencies.

In the bitcoin-wizards IRC channel in August 2016, an anonymous user posted a Tor link to a whitepaper claiming “an idea for improving privacy in bitcoin.” What followed was a blockchain proposal that uses a transaction construction radically different from anything seen today, creating one of the most elegant uses of elliptic curve cryptography seen to date.
While the whitepaper posted was enough to lay out the ideas and reasoning to support the theory, it contained no explicit mathematics or security analysis. Andrew Poelstra, a mathematician and the Director of Research at Blockstream, immediately began analyzing its merits and over the next two months, created a detailed whitepaper [Poel16] outlining the cryptography, fundamental theorems, and protocol involved in creating a standalone blockchain.
What it sets out to do as a protocol is to wholly conceal the values in transactions and eliminate the need for addresses while simultaneously solving the scaling issue.

Confidential Transactions

Let’s say you want to hide the amount that you are sending. One well-known and quick way to hide information: hashing! Hashing allows you to deterministically produce a random-looking string of constant length regardless of the size of the input, one that is impossible to reverse. We could then hash the amount and send that in the transaction.

X = SHA256(amount)
or
4A44DC15364204A80FE80E9039455CC1608281820FE2B24F1E5233ADE6AF1DD5 = SHA256(10)

But since hashing is deterministic, all someone would have to do is catalog all the hashes for all possible amounts, and the whole purpose of hashing in the first place would be nullified. So instead of just hashing the amount, let’s first multiply this amount by a private blinding factor. If kept private, there is no way of knowing the amount inside the hash.

X = SHA256(blinding factor * amount)

This is called a commitment: you are committing to a value without revealing it, and in a way that the value cannot be changed without changing the resultant value of the commitment.
But how then would a node validate a transaction using this commitment scheme? At the very least, we need to prove that you satisfy two conditions: one, you have enough coins, and two, you are not creating coins in the process. The way most protocols validate this is by consuming one or more previous input transactions and, in the process, creating outputs that do not exceed the sum of the inputs. If we hash the values and have no way to validate this condition, one could create coins out of thin air.

input(commit(bf,10), Alice) -> output(commit(bf,9), Bob), outputchange(commit(bf,5), Alice)
Or
input(4A44DC15364204A80FE80E9039455CC1608281820FE2B24F1E5233ADE6AF1DD5, Alice) ->
output(19581E27DE7CED00FF1CE50B2047E7A567C76B1CBAEBABE5EF03F7C3017BB5B7, Bob)
output(EF2D127DE37B942BAAD06145E54B0C619A1F22327B2EBBCFBEC78F5564AFE39D, Alice)

As shown above, the hashed values look just as valid as anything else, and the result is Alice creating 4 coins and receiving them as change in her transaction. In any transaction, the sum of the inputs must equal the sum of the outputs. We need some way of doing mathematics on these hashed values to be able to prove:

commit(bf1,x) = commit(bf2,y1) + commit(bf3,y2)

which, if it is a valid transaction, would be:

commit(bf1,x) - commit(bf2+bf3,y1+y2) = commit(bf1-(bf2+bf3),0)

Or just a commit of the leftover blinding factors.

By virtue of how hashing algorithms work, this isn't possible. To verify this, we would have to make all blinding factors and amounts public, but in doing so, nothing is private. How then can we make a value public that is derived from a private value, in such a way that you cannot reverse-engineer the private value yet can still validate that it satisfies some condition? It sounds a bit like public and private key cryptography…
What we learned in our primer on elliptic-curve cryptography was that by using an elliptic curve to define our number space, we can take a point on the curve, G, and multiply it by any number, x, and what we get is another valid point, P, on the same curve. This calculation is quick, but given the resultant point and the publicly known generator point G, it is practically impossible to figure out what multiplier was used. This way we can use the point P as the public key and the number x as the private key. Interestingly, these operations also have the curious property of being additive and commutative.
If you take point P, which is xG, and add point Q to it, which is yG, the resulting point W = P + Q is equal to the point created from the combined number x+y. So:
https://preview.redd.it/yv0knclr6p331.png?width=800&format=png&auto=webp&s=9a3abccdc164e615651147141736356013e4b829
This property, homomorphism, allows us to do math with numbers we do not know.
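To make this concrete, here is a small pure-Python sketch (no external libraries) of that homomorphic property on the secp256k1 curve used by Bitcoin; the secret numbers x and y are arbitrary examples, and the point arithmetic is the textbook formula:

    # secp256k1: y^2 = x^3 + 7 over the prime field p
    p = 2**256 - 2**32 - 977
    G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
         0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

    def ec_add(P, Q):
        """Add two curve points (None is the point at infinity)."""
        if P is None: return Q
        if Q is None: return P
        if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
            return None
        if P == Q:
            lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
        else:
            lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
        x = (lam * lam - P[0] - Q[0]) % p
        return (x, (lam * (P[0] - x) - P[1]) % p)

    def ec_mul(k, P):
        """Multiply point P by integer k (double-and-add)."""
        R = None
        while k:
            if k & 1:
                R = ec_add(R, P)
            P = ec_add(P, P)
            k >>= 1
        return R

    x, y = 41, 99  # two numbers we want to keep secret
    assert ec_mul(x + y, G) == ec_add(ec_mul(x, G), ec_mul(y, G))
    print("(x + y)G == xG + yG -- math on hidden numbers works")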
So instead of using the raw amount and blinding factor in our commit, we use each of them multiplied by a known generator point on an elliptic curve. Our commit can now be defined as:
https://preview.redd.it/aas2wm0u6p331.png?width=800&format=png&auto=webp&s=c3ebb5728f755f30e878ce5f1885397f6667d4f3
This is called a Pedersen Commitment and serves as the core of all Confidential Transactions.
Let’s call the blinding factors r, and the amounts v, and use H and G as generator points on the same elliptic curve (without going deep into Schnorr signatures, we will just accept that we have to use two different points for the blinding factor and value commits for validation purposes**). Applying this to our previous commitments:
https://preview.redd.it/zf246t8z6p331.png?width=800&format=png&auto=webp&s=17e2e155c59002f05f38ccb27082f79a5dd98a1f
and using the commutative properties:
https://preview.redd.it/km4fuf017p331.png?width=800&format=png&auto=webp&s=13541d62ec3f6e5728388b7a8d995c3829364a42
which, for a valid transaction (where vi = vo + vco, so the value terms cancel), equals:
(ri - (ro + rco))•G
with ri, vi being the values for the input, ro, vo being the values for the output, and rco, vco being the values for the change output.

This resultant difference is just a commit to the excess blinding factor, also called a commitment-to-zero:
https://preview.redd.it/tqnwao667p331.png?width=800&format=png&auto=webp&s=9da5ecab5c670024f171a441e0d2477cf8f41a56
You can see that in any case where the blinding factors were selected randomly, the commit-to-zero will be non-zero and, in fact, still a valid point on the elliptic curve with public key,
https://preview.redd.it/19ry9i297p331.png?width=800&format=png&auto=webp&s=4fb6628a01dc784816e1aea43cc0f5cfb025af52
and the private key being the difference of the blinding factors.
So, if the sum of the inputs minus the sum of the outputs produces a valid public key on the curve, you know that the values have balanced to zero and no coins were created. If the resultant difference is not of the form
https://preview.redd.it/71mpdobb7p331.png?width=800&format=png&auto=webp&s=143d28da48d40208d5ef338444b3c7edea1fab9c
for some excess blinding factor, it would not be a valid public key on the curve, and we would know that it is not a balanced transaction. To prove this, the transaction is then signed with this public key to show that the transaction is balanced and that all blinding factors are known; in the process, no information about the transaction has been revealed (the step-by-step details of the signature process can be read in [Arvan19]).
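Putting the pieces together, below is a sketch of this balance check, reusing ec_add, ec_mul, G and p from the sketch above. One loud caveat: deriving H as a hash-chosen multiple of G is for illustration only; in a real system no one may know the discrete log of H with respect to G, or the commitment loses its binding property. The blinding factors and amounts are arbitrary examples:

    import hashlib

    # Second generator (illustrative only -- see the caveat above).
    H = ec_mul(int.from_bytes(hashlib.sha256(b"H").digest(), "big"), G)

    def commit(r, v):
        """Pedersen commitment r*G + v*H."""
        return ec_add(ec_mul(r, G), ec_mul(v, H))

    # Alice spends a 10-coin input, paying Bob 9 with 1 change.
    r_in, v_in   = 9999, 10
    r_bob, v_bob = 2222, 9
    r_chg, v_chg = 3333, 1

    inputs  = commit(r_in, v_in)
    outputs = ec_add(commit(r_bob, v_bob), commit(r_chg, v_chg))

    # Inputs minus outputs (subtraction = adding the negated point) must
    # equal the excess blinding factor times G: a commit to zero coins.
    diff   = ec_add(inputs, (outputs[0], -outputs[1] % p))
    excess = r_in - (r_bob + r_chg)  # known only to the transacting parties
    assert diff == ec_mul(excess, G)
    print("commit-to-zero verified: values balance, nothing revealed")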
All the above work assumed the numbers were positive. One could create an equally valid balanced transaction with negative numbers, allowing users to create new coins with every transaction. To prevent this, each transaction must be accompanied by a range proof: a zero-knowledge argument of knowledge proving that a private committed value lies within a predetermined range of values. Mimblewimble, as well as Monero, uses Bulletproofs, a new way of calculating the proof which cuts down its size by 80–90%.

*Average sizes of transactions seen in current networks or by assuming 2 input 2.5 output average tx size for MW

Up to this point, the protocol described is more-or-less identical between Mimblewimble and Monero. The point of deviation is how transactions are signed.
In Monero, there are two sets of keys/addresses: the spend keys and the view keys. The spend key is used to generate and sign transactions, while the view key is used to “receive” transactions. Transactions are signed with what is called a ring signature, derived from the output being spent, proving that one key out of a group of keys possesses the spend key. This is done by creating a combined Schnorr signature from your private key and a mix of decoy signers taken from the public keys of previous transactions. These decoy signers are all mathematically equally valid, which makes it impossible to determine which one is the real signer. Since Monero uses the Pedersen commitments shown above, the addresses are never publicly visible but are just used for claiming and signing transactions and for generating blinding factors.
Mimblewimble, on the other hand, does not use addresses of any type. Yes. That’s right, no addresses. This is the true brilliance of the protocol. What Jedusor proved was that the blinding factors within the Pedersen commit and the commit-to-zero can be used as single-use public/private key pairs to create and sign transactions.
All address based protocols using elliptic-curve cryptography generate public-private key pairs in essentially the same way. By multiplying a very large random number (k_priv) by a point (G) on an elliptic curve, the result (K_pub) is another valid point on the same curve.
https://preview.redd.it/pt2xr33i7p331.png?width=800&format=png&auto=webp&s=1785cebcc842cab19b3987d848b2029032ae1195
This serves as the core of all address generation. Does that look familiar?
Remember this commit from above:
https://preview.redd.it/w9ooxudk7p331.png?width=800&format=png&auto=webp&s=d94ad3ac103352aa4c9653934d61cccc25a6bf8f
Each blinding factor multiplied by generator point G (in red) is exactly that! r•G is the public key with private key r! So instead of using addresses, we can use these blinding factors as proof we own the inputs and outputs by using these values to build the signature.
This seemingly minor change removes the linkability of addresses and the need for a scriptSig process to check for signature validity, which greatly simplifies the structure and size of Confidential Transactions. Of course, this means (at this time) that the transaction process requires interaction between parties to create signatures.

CoinJoin

Even though all addresses and amounts are now hidden, there is still some information that can be gathered from the transactions. In the above transaction format, it is still clear which outputs are consumed and what comes out of the transaction. This “transaction graph” can reveal information about the owners of the blinding factors and build a picture of the user based on observed transaction activity. In order to further hide and condense information, Mimblewimble implements an idea from Greg Maxwell called CoinJoin [Max13], which was originally developed for use in Bitcoin. CoinJoin is a trustless method for combining multiple inputs and outputs from multiple transactions, joining them into a single transaction. What this does is mask which sender paid which recipient. To accomplish this in Bitcoin, users or wallets must interact to join transactions of like amounts so that one cannot be distinguished from the other. If you were able to combine signatures without sharing private keys, you could create a combined signature for many transactions (like ring signatures) and not be bound by needing like amounts.

In this CoinJoin tx, 3 addresses have 4 outputs with no way of correlating who sent what
In Mimblewimble, doing the balance calculation for one transaction or for many transactions still works out to a valid commit-to-zero; all we would need is a combined signature for the combined transaction. Mimblewimble's Schnorr-challenge transaction construction innately enables building these combined signatures. Using “one-way aggregate signatures” (OWAS), nodes can combine transactions, while creating the block, into a single transaction with one aggregate signature. Using this, Mimblewimble joins all transactions at the block level, effectively creating each block as one big transaction of all inputs consumed and all outputs created. This simultaneously blurs the transaction graph and has the power to remove in-between transactions that were spent during the block, cutting down the total size of blocks and the size of the blockchain.

Cut-through

We can take this one step further. To validate this fully “joined” block, the node sums all of the output commitments, subtracts all the input commitments, and validates that the result is a valid commit-to-zero. But why stop at joining the transactions within a single block? We could theoretically combine two blocks, removing any outputs that are created and spent within them, and the result again is a valid transaction of just unspent commitments and nothing else. We could then do this all the way back to the genesis block, reducing the whole blockchain to just a state of unspent commitments. This is called cut-through. When doing this, we don't have any need to retain the range proofs of spent outputs; they have been verified and can be discarded. This lends itself to a massive reduction in blockchain growth, reducing growth from O(number of txs) to O(number of unspent outputs).
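A toy sketch of the cut-through bookkeeping (commits are plain strings here; real ones would be curve points):

    def cut_through(transactions):
        """Collapse a tx list to (external inputs consumed, unspent outputs)."""
        created, consumed = set(), set()
        for inputs, outputs in transactions:
            consumed |= inputs
            created |= outputs
        return consumed - created, created - consumed

    txs = [({"A"}, {"B", "C"}),  # spend A, create B and C
           ({"B"}, {"D"})]       # spend B, create D
    print(cut_through(txs))      # ({'A'}, {'C', 'D'}) -- B vanished entirely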
To illustrate the impact of this, imagine if Mimblewimble had been implemented in the Bitcoin network from the beginning. With the network at block 576,000, the blockchain is about 210 GB with 413,675,000 total transactions and 55,400,000 total unspent outputs. In Mimblewimble, transaction outputs are about 5 kB (range proof ~5 kB plus Pedersen commit ~33 bytes), transaction inputs are about 32 bytes, transaction proofs are about 105 bytes (commit-to-zero and signature), block headers are about 250 bytes (Merkle proof and PoW), and non-confidential transaction data is negligible. This sums up to a staggering 5.3 TB for a full-sync blockchain of all information, with “only” 279 GB of that being the UTXOs. When we cut through, we don't want to lose all the history of transactions, so we retain the proofs for all transactions as well as the UTXO set and all block headers. This reduces the blockchain to 322 GB, a 94% reduction in size. The result is basically a total consensus state of only that which has not been spent, with a full proof history, greatly reducing the sync time for new nodes.
If Bulletproofs are implemented, the range proof is reduced from over 5 kB to less than 1 kB, dropping the UTXO set in the above example from 279 GB to 57 GB.
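A quick script reproducing these estimates; every component size and the 2-input / 2.5-output average transaction are the assumptions stated above, not measured values:

    TXS    = 413_675_000  # total transactions at block 576,000
    UTXOS  = 55_400_000   # total unspent outputs
    BLOCKS = 576_000

    OUTPUT = 5_000 + 33   # ~5 kB range proof + 33-byte Pedersen commit
    INPUT  = 32           # reference to a consumed commit
    KERNEL = 105          # commit-to-zero + signature, per transaction
    HEADER = 250          # Merkle proof + PoW, per block

    full_sync   = TXS * (2.5 * OUTPUT + 2 * INPUT + KERNEL) + BLOCKS * HEADER
    cut_through = UTXOS * OUTPUT + TXS * KERNEL + BLOCKS * HEADER

    print(f"full chain:  {full_sync / 1e12:.1f} TB")   # ~5.3 TB
    print(f"cut-through: {cut_through / 1e9:.0f} GB")  # ~322 GB
    print(f"UTXO set with Bulletproofs: {UTXOS * 1_033 / 1e9:.0f} GB")  # ~57 GB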

*Based on the assumptions and calculations above.

There is also an interesting implication for PoS blockchains with explicit finality. Once finality has been reached, or at some arbitrary depth beyond it, there is no longer any need to retain range proofs: those transactions have been validated, the consensus state has been built upon them, and they make up the vast majority of the blockchain size. If we say in this example that finality happens at 100 blocks deep, and assume that only 10% of the UTXO set sits in blocks that have not yet reached finality, this would reduce the blockchain size by another 250 GB, resulting in a full-sync weight of 73 GB, a 98.6% reduction (even 65% smaller than Bitcoin's current state). Imagine this: a 73 GB blockchain for 10 years of fully anonymous Bitcoin transactions, one third the current blockchain size.
It’s important to note that cut-through has no impact on privacy or security. Each node may choose whether or not to store the entire chain without performing any cut-through, the only cost being increased disk storage. Cut-through is purely a scalability feature, resulting in Mimblewimble-based blockchains being on average three times smaller than Bitcoin and fifteen times smaller than Monero (even with the recent implementation of Bulletproofs).

What does this mean for INT and IoT?

Transactions within an IoT network require speed, scaling to tremendous volumes, and adaptability to a variety of uses and devices, with the ability to keep sensitive information private. Up to now, IoT networks have focused solely on scaling, creating networks that can transact at tremendous volume with varying degrees of decentralization and no focus on privacy. Without privacy, these networks just make those who use them targets, feeding attackers ammunition.
Mimblewimble’s revolutionary use of elliptic-curve cryptography brings us a privacy protocol using Pedersen commitments for fully confidential transactions and, in the process, removes the dependence on addresses and private keys as we are used to them. This transaction framework, combined with Bulletproofs, brings lightweight privacy and anonymity on par with Monero, in a blockchain that is 15 times smaller, utilizing full cut-through. This provides a solution for private transactions that fits the scalability requirements of the INT network.
The Mimblewimble protocol has been implemented in two different live networks, Grin and Beam. Both are purely transactional networks, focused on the private and anonymous transfer of value. Grin has taken a Bitcoin-like approach with community-funded development, no pre-mine or founders reward while Beam has the mindset of a startup, with VC funding and a large emphasis on a user-friendly experience.
INT, on the other hand, is researching implementing this protocol either on the main chain, making all INT asset transfers private, or as an optional add-on subchain, allowing users to move their INT between the non-private chain and the private chain at will.

Where does it fall short?

What makes this protocol revolutionary is the same thing that limits it. Almost all protocols, like Bitcoin, Ethereum, etc., use a basic scripting language, with function calls in the actual transaction data that tell the verifier what script to use to validate it. In the simplest case, the data provided with the input is called the “scriptSig” and provides two pieces of data: the signature that matches the transaction and the public key that proves you own the private key that created it. The output script tells the validator how to use this provided data to prove the spender is allowed to spend it: using the public key provided, the validator hashes it, checks that it matches the hashed public key in the output, and, if it does, verifies that the provided signature is valid for that public key over the transaction.
https://preview.redd.it/5u6m1eiv7p331.png?width=1200&format=png&auto=webp&s=3729eb12037107ae744d15cea9f9bc1e18a3c719
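Here is a rough Python sketch of that P2PKH check; the ecdsa_verify callback is a placeholder, since full ECDSA verification is beyond this sketch (and ripemd160 availability depends on your OpenSSL build):

    import hashlib

    def hash160(data: bytes) -> bytes:
        """Bitcoin's HASH160: RIPEMD-160 of SHA-256."""
        return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

    def verify_p2pkh(signature, pubkey, pubkey_hash_from_output, tx_digest, ecdsa_verify):
        # 1. Hash the provided public key; it must match the output's hash.
        if hash160(pubkey) != pubkey_hash_from_output:
            return False
        # 2. Check the signature over the transaction digest (placeholder).
        return ecdsa_verify(signature, pubkey, tx_digest)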
This verification protocol allows some limited scripting ability by telling validators what to do with the data provided. The Bitcoin network can be updated with new functions, allowing it to adapt to new processes or data. Using this, the Bitcoin protocol can verify multiple signatures, lock transactions for a defined timespan, and do more complex things like locking bitcoin in an account until some outside action is taken.
In order to achieve more widely applicable public smart contracts like those in Ethereum, they need to be provided data in a non-shielded way or create shielded proofs that prove you satisfy the smart contract conditions.
In Mimblewimble, a consequence of using the blinding factors as the key pairs (which greatly simplifies the signature verification process) is that there are no normal scripting opportunities in the base protocol. What is recorded on the blockchain is just:

https://preview.redd.it/dwhiuc8y7p331.png?width=1200&format=png&auto=webp&s=69ea0a7797bc94a9766a4b31a639666bf9f1ebc4
  • Inputs used — which are old commits consumed
  • New outputs — which are new commits to publish
  • Transaction kernel — which contains the signature for the transaction with excess blinding factor, transaction fee, and lock_height.
None of these items can be related to one another, and they contain no useful data to drive action.
There are some proposals for creative solutions to this problem using so-called scriptless scripts†. By utilizing the properties of the Schnorr signatures used, you can achieve multisig transactions and more complex condition-based transactions like atomic cross-chain swaps and maybe even Lightning-Network-style state channels. Still, this is not enough complexity to fulfill all the needs of IoT smart contracts.
And on top of it all, implementing cut-through would remove transactions that might be smart contracts or rely on them.
So you can see that with this design we can successfully hide values and ownership, but only for a single-dimensional data point: quantity. Doing anything more complex than transferring ownership of coins is beyond its capabilities. But the proof of ownership and commit-to-zero is really just a specific type of zero-knowledge (ZK) proof. So what if, instead of blinding a value, we blind a proof?
Part 2 of this series will cover implementing private smart contracts with zkSNARKs.

References and Notes

https://github.com/ignopeverell/grin/blob/master/doc/intro.md
https://github.com/mimblewimble/grin/blob/master/doc/pow/pow.md
https://github.com/mimblewimble/grin/wiki/Grin-and-MimbleWimble-vs-ZCash
https://bitcointalk.org/index.php?topic=30579
[Poel16] http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimble-andytoshi-INCOMPLETE-DRAFT-2016-10-06-001.pdf
** In order to prove that v = 0, and therefore that the commit-to-zero in fact has no H component, without revealing r, we must use the Schnorr protocol:
the prover generates a random integer n, computes and sends the point T ← nH
the verifier generates and sends a random integer i
the prover computes and sends the integer s ← i·r + n mod q, where q is the (public) order of the curve
the verifier, knowing the point rH, computes the point i(rH), then the point i(rH) + T; computes the point sH; and ensures i(rH) + T = sH.
[Arvan19] https://medium.com/@brandonarvanaghi/grin-transactions-explained-step-by-step-fdceb905a853
[Bulletproofs] https://eprint.iacr.org/2017/1066.pdf
[Max13] https://bitcointalk.org/?topic=279249
[MaxCT] https://people.xiph.org/~greg/confidential_values.txt
[Back13] https://bitcointalk.org/index.php?topic=305791.0
http://diyhpl.us/wiki/transcripts/grincon/2019/scriptless-scripts-with-mimblewimble/
https://tlu.tarilabs.com/cryptography/scriptless-scripts/introduction-to-scriptless-scripts.html#list-of-scriptless-scripts
http://diyhpl.us/~bryan/papers2/bitcoin/2017-03-mit-bitcoin-expo-andytoshi-mimblewmble-scriptless-scripts.pdf
submitted by INTCHAIN to INT_Chain [link] [comments]

[HIRING] Corrupt 7-Zip repair

Here is all the info:
7Zip on Linux:
7-Zip [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21 p7zip Version 16.02 (locale=en_GB.UTF-8,Utf16=on,HugeFiles=on,64 bits,4 CPUs AMD FX(tm)-4300 Quad-Core Processor (600F20),ASM,AES-NI)
Scanning the drive for archives: 1 file, 16835616 bytes (17 MiB)
Extracting archive: 20181101.7z ERROR: 20181101.7z 20181101.7z Open ERROR: Can not open the file as [7z] archive
ERRORS: Unexpected end of archive
Can't open as archive: 1 Files: 0 Size: 0 Compressed: 0
File details:
First 8 rows (offsets 00000000 to 00000089?)
37 7A BC AF 27 1C 00 04 50 94 26 AB E8 A3 0B 01 00 00 00 00 62 00 00 00 00 00 00 00 A0 32 8C 34 EC D5 3B E0 6A 5D 00 2A 10 46 45 B6 98 6C 31 43 8A 81 BA C8 4A FD 52 B4 46 38 B5 C9 89 6D 2B DA A1 B6 DE DC D5 98 C8 31 F3 E4 48 62 FB 01 88 A9 BA EC 5B 9F 8A 7F 3D 25 F7 7A F9 16 07 C4 07 5C D3 52 8B 7D E5 1B 0D E6 6F 28 BB 9F D2 4B C3 10 58 76 E4 A2 F2 F6 BC 34 A6 99 09 2C 39 01 F7 3E FA 2C A1 80 C8 84 57 CA 5B 28 92 92 0C 2B C6 FE
Last 8 rows (offsets 01012810 to 01012889)
7F C8 28 01 A2 CF A0 47 B7 95 9B 90 34 EB F8 99 0F E3 EA A0 F8 FB 7A AD 81 83 35 E8 77 08 48 A8 03 E7 1F 9D 1F B6 02 FF B7 0C E6 A0 D1 20 EB 45 0A B8 66 68 38 96 3E 1A F6 BB 5F 64 2D 94 D8 CF 52 BF F1 18 4D E6 7F 32 C1 B0 30 A6 BA 9C 49 4E 0E C5 12 FA 28 FD B5 9F CD A3 2F 2A C7 D7 EC 40 DD B4 C4 4B 2F DA 8F 8F 55 55 73 89 09 4F 5E E5 39 30 A3 5C FF 70 3F 47 CC 2B 01 38 E5 6F 43 7B
I have not really tried anything; I have been looking at https://www.7-zip.org/recover.html but haven't had the time to do it myself. Of course I'll hand the 7z file over. I can pay in Bitcoin or PayPal.
submitted by laci420 to Jobs4Bitcoins [link] [comments]

Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, Folding@home, SETI@home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.

TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level.
The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited).
No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either.
Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successful tried-and-true "recipes" for solving "embarrassingly parallel" search problems in massive search spaces, based on standard DISTRIBUTED COMPUTING approaches like Google Search (based on Google's MapReduce algorithm), or [email protected], [email protected], or PrimeGrid (based on Berkeley's BOINC grid computing architecture). The fact that Blockstream / Core devs refuse to consider any standard DISTRIBUTED COMPUTING approaches just proves that they're "embarrassingly stupid" - and the only way Bitcoin will succeed is by routing around their damage.
Proven, mature sharding architectures like the ones powering Google Search, Folding@home, SETI@home, or PrimeGrid will allow Bitcoin to achieve virtually unlimited on-chain scaling, with minimal disruption to the existing Bitcoin network topology and mining and wallet software.
Longer Summary:
People who argue that "Bitcoin can't scale" - because it involves major physical / hardware requirements (lots of processing power, upload bandwidth, storage space) - are at best simply misinformed or incompetent - or at worst outright lying to you.
Bitcoin mainly involves searching the blockchain to prevent double-spends - and so it is similar to many other projects involving "embarrassingly parallel" searching in massive search spaces - like Google Search, Folding@home, SETI@home, or PrimeGrid.
But there's a big difference between those long-running wildly successful massively distributed infinitely scalable parallel computing projects, and Bitcoin.
Those other projects do their data storage and processing across a distributed network. But Bitcoin (under the misguided "leadership" of Core / Blockstream devs) insists on a fatally flawed design philosophy where every individual node must be able to download, store and verify the system's entire data structure. And it's even worse than that - they want to let the least powerful nodes in the system dictate the resource requirements for everyone else.
Meanwhile, those other projects are all based on some kind of "distributed computing" involving "sharding". They achieve massive scaling by adding a virtually unlimited (and fault-tolerant) logical / software layer on top of the underlying resource-constrained / limited physical / hardware layer - using approaches like Google's MapReduce algorithm or Berkeley's Open Infrastructure for Network Computing (BOINC) grid computing architecture.
This shows that it is a fundamental error to continue insisting on viewing an individual Bitcoin "node" as the fundamental "unit" of the Bitcoin network. Coordinated distributed pools already exist for mining the blockchain - and eventually coordinated distributed trustless architectures will also exist for verifying and querying it. Any architecture or design philosophy where a single "node" is expected to be forever responsible for storing or verifying the entire blockchain is the wrong approach, and is doomed to failure.
The most well-known example of this doomed approach is Blockstream / Core's "roadmap" - which is based on two disastrously erroneous design requirements:
  • Core / Blockstream erroneously insist that the entire blockchain must always be downloadable, storable and verifiable on a single node, as dictated by the least powerful nodes in the system (eg, u/bitusher in Costa Rica, or u/Luke-Jr in the underserved backwoods of Florida); and
  • Core / Blockstream support convoluted, incomplete off-chain scaling approaches such as the so-called "Lightning Network" - which lacks a mathematical foundation, and also has some serious gaps (eg, no solution for decentralized routing).
Instead, the future of Bitcoin will inevitably be based on unlimited on-chain scaling, where all of Bitcoin's existing algorithms and data structures and networking are essentially preserved unchanged / as-is - but they are distributed at the logical / software level using sharding approaches such as u/thezerg1's BUIP024 or distributed trustless Merkle trees.
These kinds of sharding architectures will allow individual nodes to use a minimum of physical resources to access a maximum of logical storage and processing resources across a distributed network with virtually unlimited on-chain scaling - where every node will be able to use and verify the entire blockchain without having to download and store the whole thing - just like Google Search, Folding@home, SETI@home, or PrimeGrid and other successful distributed sharding-based projects have already been successfully doing for years.
Details:
Sharding, which has been so successful in many other areas, is a topic that keeps resurfacing in various shapes and forms among independent Bitcoin developers.
The highly successful track record of sharding architectures on other projects involving "embarrassingly parallel" massive search problems (harnessing resource-constrained machines at the physical level into a distributed network at the logical level, in order to provide fault tolerance and virtually unlimited scaling searching for web pages, interstellar radio signals, protein sequences, or prime numbers in massive search spaces up to hundreds of terabytes in size) provides convincing evidence that sharding architectures will also work for Bitcoin (which also requires virtually unlimited on-chain scaling, searching the ever-expanding blockchain for previous "spends" from an existing address, before appending a new transaction from this address to the blockchain).
Below are some links involving proposals for sharding Bitcoin, plus more discussion and related examples.
BUIP024: Extension Blocks with Address Sharding
https://np.reddit.com/btc/comments/54afm7/buip024_extension_blocks_with_address_sharding/
Why aren't we as a community talking about Sharding as a scaling solution?
https://np.reddit.com/Bitcoin/comments/3u1m36/why_arent_we_as_a_community_talking_about/
(There are some detailed, partially encouraging comments from u/petertodd in that thread.)
[Brainstorming] Could Bitcoin ever scale like BitTorrent, using something like "mempool sharding"?
https://np.reddit.com/btc/comments/3v070a/brainstorming_could_bitcoin_ever_scale_like/
[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?
https://np.reddit.com/btc/comments/3wtwa7/brainstorming_lets_fork_smarter_not_harder_can_we/
"Braiding the Blockchain" (32 min + Q&A): We can't remove all sources of latency. We can redesign the "chain" to tolerate multiple simultaneous writers. Let miners mine and validate at the same time. Ideal block time / size / difficulty can become emergent per-node properties of the network topology
https://np.reddit.com/btc/comments/4su1gf/braiding_the_blockchain_32_min_qa_we_cant_remove/
Some kind of sharding - perhaps based on address sharding as in BUIP024, or based on distributed trustless Merkle trees as proposed earlier by u/thezerg1 - is very likely to turn out to be the simplest, and safest approach towards massive on-chain scaling.
A thought experiment showing that we already have most of the ingredients for a kind of simplistic "instant sharding"
A simplistic thought experiment can be used to illustrate how easy it could be to do sharding - with almost no changes to the existing Bitcoin system.
Recall that Bitcoin addresses and keys are composed from an alphabet of 58 characters. So, in this simplified thought experiment, we will outline a way to add a kind of "instant sharding" within the existing system - by using the last character of each address in order to assign that address to one of 58 shards.
(Maybe you can already see where this is going...)
Similar to vanity address generation, a user who wants to receive Bitcoins would be required to generate 58 different receiving addresses (each ending with a different character) - and, similarly, miners could be required to pick one of the 58 shards to mine on.
Then, when a user wanted to send money, they would have to look at the last character of their "send from" address - and also select a "send to" address ending in the same character - and presto! we already have a kind of simplistic "instant sharding". (And note that this part of the thought experiment would require only the "softest" kind of soft fork: indeed, we haven't changed any of the code at all, but instead we simply adopted a new convention by agreement, while using the existing code.)
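A minimal sketch of this convention in Python (the address below, the genesis block's, is just a familiar example):

    BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def shard_of(address: str) -> int:
        """Shard index 0..57, from the address's last base58 character."""
        return BASE58.index(address[-1])

    def same_shard(send_addr: str, recv_addr: str) -> bool:
        # The convention: sender and receiver must land in the same shard.
        return shard_of(send_addr) == shard_of(recv_addr)

    print(shard_of("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"))  # -> 33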
Of course, this simplistic "instant sharding" example would still need a few more features in order to be complete - but they'd all be fairly straightforward to provide:
  • A transaction can actually send from multiple addresses, to multiple addresses - so the approach of simply looking at the final character of a single (receive) address would not be enough to instantly assign a transaction to a particular shard. But a slightly more sophisticated decision criterion could easily be developed - and computed using code - to assign every transaction to a particular shard, based on the "from" and "to" addresses in the transaction. The basic concept from the "simplistic" example would remain the same, sharding the network based on some characteristic of transactions.
  • If we had 58 shards, then the mining reward would have to be decreased to 1/58 of what it currently is - and also the mining hash power on each of the shards would end up being roughly 1/58 of what it is now. In general, many people might agree that decreased mining rewards would actually be a good thing (spreading out mining rewards among more people, instead of the current problems where mining is done by about 8 entities). Also, network hashing power has been growing insanely for years, so we probably have way more than enough needed to secure the network - after all, Bitcoin was secure back when network hash power was 1/58 of what it is now.
  • This simplistic example does not handle cases where you need to do "cross-shard" transactions. But it should be feasible to implement such a thing. The various proposals from u/thezerg1 such as BUIP024 do deal with "cross-shard" transactions.
(Also, the fact that a simplified address-based sharding mechanics can be outlined in just a few paragraphs as shown here suggests that this might be "simple and understandable enough to actually work" - unlike something such as the so-called "Lightning Network", which is actually just a catchy-sounding name with no clearly defined mechanics or mathematics behind it.)
Addresses are plentiful, and can be generated locally, and you can generate addresses satisfying a certain pattern (eg ending in a certain character) the same way people can already generate vanity addresses. So imposing a "convention" where the "send" and "receive" address would have to end in the same character (and where the miner has to only mine transactions in that shard) - would be easy to understand and do.
Similarly, the earlier solution proposed by u/thezerg1, involving distributed trustless Merkle trees, is easy to understand: you'd just be distributing the Merkle tree across multiple nodes, while still preserving its immutablity guarantees.
Such approaches don't really change much about the actual system itself. They preserve the existing system, and just split its data structures into multiple pieces, distributed across the network. As long as we have the appropriate operators for decomposing and recomposing the pieces, then everything should work the same - but more efficiently, with unlimited on-chain scaling, and much lower resource requirements.
The examples below show how these kinds of "sharding" approaches have already been implemented successfully in many other systems.
Massive search is already efficiently performed with virtually unlimited scaling using divide-and-conquer / decompose-and-recompose approaches such as MapReduce and BOINC.
Every time you do a Google search, you're using Google's MapReduce algorithm to solve an embarrassingly parallel problem.
And distributed computing grids using the Berkeley Open Infrastructure for Network Computing (BOINC) are constantly setting new records searching for protein combinations, prime numbers, or radio signals from possible intelligent life in the universe.
We all use Google to search hundreds of terabytes of data on the web and get results in a fraction of a second - using cheap "commodity boxes" on the server side, and possibly using limited bandwidth on the client side - with fault tolerance to handle crashing servers and dropped connections.
Other examples are Folding@home, SETI@home and PrimeGrid - involving searching massive search spaces for protein sequences, interstellar radio signals, or prime numbers hundreds of thousands of digits long. Each of these examples uses sharding to decompose a giant search space into smaller sub-spaces which are searched separately in parallel, after which the resulting (sub-)solutions are recomposed to provide the overall search results.
It seems obvious to apply this tactic to Bitcoin - searching the blockchain for existing transactions involving a "send" from an address, before appending a new "send" transaction from that address to the blockchain.
Some people might object that those systems are different from Bitcoin.
But we should remember that preventing double-spends (the main thing that the Bitcoin does) is, after all, an embarrassingly parallel massive search problem - and all of these other systems also involve embarrassingly parallel massive search problems.
The mathematics of Google's MapReduce and Berkeley's BOINC is simple, elegant, powerful - and provably correct.
Google's MapReduce and Berkeley's BOINC have demonstrated that in order to provide massive scaling for efficient searching of massive search spaces, all you need is...
  • an appropriate "decompose" operation,
  • an appropriate "recompose" operation,
  • the necessary coordination mechanisms
...in order to distribute a single problem across multiple, cheap, fault-tolerant processors.
This allows you to decompose the problem into tiny sub-problems, solving each sub-problem to provide a sub-solution, and then recompose the sub-solutions into the overall solution - gaining virtually unlimited scaling and massive efficiency.
The only "hard" part involves analyzing the search space in order to select the appropriate DECOMPOSE and RECOMPOSE operations which guarantee that recomposing the "sub-solutions" obtained by decomposing the original problem is equivalent to the solving the original problem. This essential property could be expressed in "pseudo-code" as follows:
  • (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE)
Selecting the appropriate DECOMPOSE and RECOMPOSE operations (and implementing the inter-machine communication coordination) can be somewhat challenging, but it's certainly doable.
In fact, as mentioned already, these things have already been done in many distributed computing systems. So there's hardly any "original" work to be done in this case. All we need to focus on now is translating the existing single-processor architecture of Bitcoin to a distributed architecture, adopting the mature, proven, efficient "recipes" provided by the many examples of successful distributed systems already up and running, such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture).
That's what any "competent" company with $76 million to spend would have done already - simply work with some devs who know how to implement open-source distributed systems, and focus on adapting Bitcoin's particular data structures (merkle trees, hashed chains) to a distributed environment. That's a realistic roadmap that any team of decent programmers with distributed computing experience could easily implement in a few months, and any decent managers could easily manage and roll out on a pre-determined schedule - instead of all these broken promises and missed deadlines and non-existent vaporware and pathetic excuses we've been getting from the incompetent losers and frauds involved with Core / Blockstream.
ASIDE: MapReduce and BOINC are based on math - but the so-called "Lightning Network" is based on wishful thinking involving kludges on top of workarounds on top of hacks - which is how you can tell that LN will never work.
Once you have succeeded in selecting the appropriate mathematical DECOMPOSE and RECOMPOSE operations, you get simple massive scaling - and it's also simple for anyone to verify that these operations are correct - often in about a half-page of math and code.
An example of this kind of elegance and brevity (and provable correctness) involving compositionality can be seen in this YouTube clip by the accomplished mathematician Lucius Greg Meredith presenting some operators for scaling Ethereum - in just a half page of code:
https://youtu.be/uzahKc_ukfM?t=1101
Conversely, if you fail to select the appropriate mathematical DECOMPOSE and RECOMPOSE operations, then you end up with a convoluted mess of wishful thinking - like the "whitepaper" for the so-called "Lightning Network", which is just a cool-sounding name with no actual mathematics behind it.
The LN "whitepaper" is an amateurish, non-mathematical meandering mishmash of 60 pages of "Alice sends Bob" examples involving hacks on top of workarounds on top of kludges - also containing a fatal flaw (a lack of any proposed solution for doing decentralized routing).
The disaster of the so-called "Lightning Network" - involving adding never-ending kludges on top of hacks on top of workarounds (plus all kinds of "timing" dependencies) - is reminiscent of the "epicycles" which were desperately added in a last-ditch attempt to make Ptolemy's "geocentric" system work - based on the incorrect assumption that the Sun revolved around the Earth.
This is how you can tell that the approach of the so-called "Lightning Network" is simply wrong, and it would never work - because it fails to provide appropriate (and simple, and provably correct) mathematical DECOMPOSE and RECOMPOSE operations in less than a single page of math and code.
Meanwhile, sharding approaches based on a DECOMPOSE and RECOMPOSE operation are simple and elegant - and "functional" (ie, they don't involve "procedural" timing dependencies like keeping your node running all the time, or closing out your channel before a certain deadline).
Bitcoin only has 6,000 nodes - but the leading sharding-based projects have over 100,000 nodes, with no financial incentives.
Many of these sharding-based projects have many more nodes than the Bitcoin network.
The Bitcoin network currently has about 6,000 nodes - even though there are financial incentives for running a node (ie, verifying your own Bitcoin balance).
Folding@home and SETI@home each have over 100,000 active users - even though these projects don't provide any financial incentives. This higher number of users might be due in part to the low resource demands required in these BOINC-based projects, which are all based on sharding the data set.
Folding@home
As part of the client-server network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation.
In 2007, Guinness World Records recognized Folding@home as the most powerful distributed computing network. As of September 30, 2014, the project has 107,708 active CPU cores and 63,977 active GPUs for a total of 40.190 x86 petaFLOPS (19.282 native petaFLOPS). At the same time, the combined efforts of all distributed computing projects under BOINC totals 7.924 petaFLOPS.
SETI@home
Using distributed computing, SETI@home sends the millions of chunks of data to be analyzed off-site by home computers, and then has those computers report the results. Thus what appears to be an onerous problem in data analysis is reduced to a reasonable one by aid from a large, Internet-based community of borrowed computer resources.
Observational data are recorded on 2-terabyte SATA hard disk drives at the Arecibo Observatory in Puerto Rico, each holding about 2.5 days of observations, which are then sent to Berkeley. Arecibo does not have a broadband Internet connection, so data must go by postal mail to Berkeley. Once there, it is divided in both time and frequency domains into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units are then sent from the SETI@home server over the Internet to personal computers around the world to analyze.
Data is merged into a database using SETI@home computers in Berkeley.
The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused.
Active users: 121,780 (January 2015)
PrimeGrid
PrimeGrid is a distributed computing project for searching for prime numbers of world-record size. It makes use of the Berkeley Open Infrastructure for Network Computing (BOINC) platform.
Active users 8,382 (March 2016)
MapReduce
A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
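A toy MapReduce in plain Python, mirroring that Map()/Reduce() description with the classic word-count example:

    from collections import defaultdict

    def map_phase(documents):
        # Map: emit a (key, 1) pair for every word seen.
        for doc in documents:
            for word in doc.split():
                yield word, 1

    def shuffle(pairs):
        # Shuffle: group all emitted values by key.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        # Reduce: summarize each group (here, a count).
        return {key: sum(values) for key, values in groups.items()}

    docs = ["to be or not to be", "to do is to be"]
    print(reduce_phase(shuffle(map_phase(docs))))
    # {'to': 4, 'be': 3, 'or': 1, 'not': 1, 'do': 1, 'is': 1}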
How can we go about developing sharding approaches for Bitcoin?
We have to identify a part of the problem which is in some sense "invariant" or "unchanged" under the operations of DECOMPOSE and RECOMPOSE - and we also have to develop a coordination mechanism which orchestrates the DECOMPOSE and RECOMPOSE operations among the machines.
The simplistic thought experiment above outlined an "instant sharding" approach where we would agree upon a convention where the "send" and "receive" address would have to end in the same character - instantly providing a starting point illustrating some of the mechanics of an actual sharding solution.
BUIP024 involves address sharding and deals with the additional features needed for a complete solution - such as cross-shard transactions.
And distributed trustless Merkle trees would involve storing Merkle trees across a distributed network - which would provide the same guarantees of immutability, while drastically reducing storage requirements.
So how can we apply ideas like MapReduce and BOINC to providing massive on-chain scaling for Bitcoin?
First we have to examine the structure of the problem that we're trying to solve - and we have to try to identify how the problem involves a massive search space which can be decomposed and recomposed.
In the case of Bitcoin, the problem involves:
  • sequentializing (serializing) APPEND operations to a blockchain data structure
  • in such a way as to avoid double-spends
Can we view "preventing Bitcoin double-spends" as a "massive search space problem"?
Yes we can!
Just like Google efficiently searches hundreds of terabytes of web pages for a particular phrase (and Folding@home, SETI@home, PrimeGrid etc. efficiently search massive search spaces for other patterns), in the case of "preventing Bitcoin double-spends", all we're actually doing is searching a massive search space (the blockchain) in order to detect a previous "spend" of the same coin(s).
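As a sketch of what the sharded version of that search could look like (the route-by-hash rule here is an arbitrary illustrative choice, not BUIP024's exact scheme):

    import hashlib

    class ShardedSpentSet:
        """Spent-output index partitioned across shards (decompose step)."""
        def __init__(self, num_shards=58):
            self.shards = [set() for _ in range(num_shards)]

        def _shard(self, outpoint: str) -> set:
            # Deterministically route each outpoint to exactly one shard.
            digest = hashlib.sha256(outpoint.encode()).digest()
            return self.shards[digest[0] % len(self.shards)]

        def mark_spent(self, outpoint: str) -> None:
            self._shard(outpoint).add(outpoint)

        def is_double_spend(self, outpoint: str) -> bool:
            # Sub-solve: only the one shard that could hold it is searched.
            return outpoint in self._shard(outpoint)

    spent = ShardedSpentSet()
    spent.mark_spent("txid123:0")
    print(spent.is_double_spend("txid123:0"))  # True
    print(spent.is_double_spend("txid123:1"))  # False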
So, let's imagine how a possible future sharding-based architecture of Bitcoin might look.
We can observe that, in all cases of successful sharding solutions involving searching massive search spaces, the entire data structure is never stored / searched on a single machine.
Instead, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) provide a "virtual" layer or grid across multiple machines - allowing the data structure to be distributed across all of them, and allowing users to search across all of them.
This suggests that requiring everyone to store 80 Gigabytes (and growing) of blockchain on their own individual machine should no longer be a long-term design goal for Bitcoin.
Instead, in a sharding environment, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) should allow everyone to only store a portion of the blockchain on their machine - while also allowing anyone to search the entire blockchain across everyone's machines.
This might involve something like BUIP024's "address sharding" - or it could involve something like distributed trustless Merkle trees.
In either case, it's easy to see that the basic data structures of the system would remain conceptually unaltered - but in the sharding approaches, these structures would be logically distributed across multiple physical devices, in order to provide virtually unlimited scaling while dramatically reducing resource requirements.
This would be the most "conservative" approach to scaling Bitcoin: leaving the data structures of the system conceptually the same - and just spreading them out more, by adding the appropriately defined mathematical DECOMPOSE and RECOMPOSE operators (used in successful sharding approaches), which can be easily proven to preserve the same properties as the original system.
Conclusion
Bitcoin isn't the only project in the world which is permissionless and distributed.
Other projects (BOINC-based permissionless decentralized Folding@home, SETI@home, and PrimeGrid - as well as Google's (permissioned centralized) MapReduce-based search engine) have already achieved unlimited scaling by providing simple mathematical DECOMPOSE and RECOMPOSE operations (and coordination mechanisms) to break big problems into smaller pieces - without changing the properties of the problems or solutions. This provides massive scaling while dramatically reducing resource requirements - with several projects attracting over 100,000 nodes, much more than Bitcoin's mere 6,000 nodes - without even offering any of Bitcoin's financial incentives.
Although certain "legacy" Bitcoin development teams such as Blockstream / Core have been neglecting sharding-based scaling approaches to massive on-chain scaling (perhaps because their business models are based on misguided off-chain scaling approaches involving radical changes to Bitcoin's current successful network architecture, or even perhaps because their owners such as AXA and PwC don't want a counterparty-free new asset class to succeed and destroy their debt-based fiat wealth), emerging proposals from independent developers suggest that on-chain scaling for Bitcoin will be based on proven sharding architectures such as MapReduce and BOINC - and so we should pay more attention to these innovative, independent developers who are pursuing this important and promising line of research into providing sharding solutions for virtually unlimited on-chain Bitcoin scaling.
submitted by ydtm to btc [link] [comments]

Fraudsters misusing Bitcoin again. This time: "Bottle Browser".

https://themerkle.com/bottle-is-a-browser-capable-of-surfing-the-bitcoin-network/
This article wants to tell the reader that this "browser" will be able to store files, including even videos, on the bitcoin blockchain, and creates a completely false impression!
It is nonsense because one can only store 80 bytes of arbitrary data per transaction via OP_RETURN. With an estimated 2000 transactions per block, that is a mere 23 MB of net data that could be stored on the blockchain worldwide per day, leaving no space for normal bitcoin transactions. It would take 43 days to store a single 1 GB video this way. Worldwide! That's the limit.
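The arithmetic behind those numbers, using this post's assumptions (80-byte OP_RETURN payloads, ~2000 transactions per block, ~144 blocks per day):

    bytes_per_day = 80 * 2000 * 144
    print(bytes_per_day / 1e6)   # ~23 MB of arbitrary data per day, network-wide
    print(1e9 / bytes_per_day)   # ~43 days to store a single 1 GB video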
And these people are trying to tell us that the blockchain can be used as data storage for all sorts and types of data, including videos. How ridiculous is that!?
submitted by Amichateur to Bitcoin [link] [comments]

Estimating the marginal cost of a transaction on the Bitcoin (Cash) network

Recently, the mempool has not been clearing with every block found. Should we immediately raise the block size? Perhaps put plans to make the easy relay of sub-satoshi/byte transactions on hold?
Assumptions:
Step 1: find the price of storing a transaction
Searching NCIX:
Calculate the price per TB, assuming 1 in 6 drives is used for parity (30 bays of usable storage)
Estimate the UTXO premium:
As you can see, miners have a strong incentive to offer free UTXO consolidation transactions, and to require bulk UTXO fanning transactions to pay a fee of 494.86 sat/kB -- about 0.5 sat/byte ((0.01249 USD/kB)/(2523.96 USD/BCH)*100,000,000 sat/BCH).
Fees are nowhere near that high due to the block subsidy. For an 8 MB block: 1,250,000,000 satoshis / 8,000 kB -> 156,250 sat/kB, or more conventionally about 156 satoshis/byte. Note that the block subsidy per kB goes down with larger block sizes.
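Reproducing the arithmetic (the storage cost and exchange rate are this post's assumptions):

    usd_per_kb  = 0.01249        # estimated cost of storing 1 kB forever
    usd_per_bch = 2523.96
    sat_per_bch = 100_000_000

    print(usd_per_kb / usd_per_bch * sat_per_bch)  # ~494.86 sat/kB

    subsidy_sat = 1_250_000_000  # 12.5 BCH block subsidy in satoshis
    block_kb    = 8_000          # 8 MB block
    print(subsidy_sat / block_kb)  # 156,250 sat/kB of subsidy (~156 sat/byte)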
Step 2: Estimate Bandwidth costs
Disclaimer: I am not too familiar with commercial bandwidth plans
Exercise to the reader:
Re-do these calculations for hobbyist hardware and internet connections. You probably have to assume a smaller block size: such as 100MB.
Disclaimer: I later learned the site I was using for prices (NCIX) was bankrupt. Not sure how much that would skew prices.
submitted by phillipsjk to btc [link] [comments]

The Strange Birth & History of Monero, Part IV: Monero "as it is now"

You can read part III here.
You can read this whole story translated into Spanish here
This is part IV, the last but not least.
Monero - A secure, private, untraceable cryptocurrency
https://bitcointalk.org/index.php?topic=583449.0
Notable comments in this thread:
-201: “I would like to offer 1000 MRO to the first person who creates a pool”
(https://bitcointalk.org/index.php?topic=583449.msg6422665#msg6422665)
[tacotime offers a bounty to a potential pool developer. Bytecoin devs hadn't released any code for pools, and the only existing pool, MinerGate (in the future related to BCN interests), was closed source]
-256: “Adam back seems to like CryptoNote the better than Zerocash https://twitter.com/adam3us/status/453493394472697856”
(https://bitcointalk.org/index.php?topic=583449.msg6440769#msg6440769)
-264: “update on pools: The NOMP guy (zone117x) is looking to fork his open source software and get a pool going, so one should hopefully be up soon.”
(https://bitcointalk.org/index.php?topic=583449.msg6441302#msg6441302)
-273: “Update on GUI: othe from VertCoin has notified me that he is working on it.”
(https://bitcointalk.org/index.php?topic=583449.msg6442606#msg6442606)
-356: “Everyone wanting a pool, please help raise a bounty with me here:
https://bitcointalk.org/index.php?topic=589533.0
And for the GUI:
https://bitcointalk.org/index.php?topic=589561.0”
(https://bitcointalk.org/index.php?topic=583449.msg6461533#msg6461533)
[5439 MRO + 0.685 BTC + 5728555.555 BCN raised for the pool, and 1652 XMR + 121345.46695471 BCN for the GUI wallet. Though this wallet was "rejected" as the official GUI because the wallet still had to be polished before building a GUI]
-437: “Yes, most Windows users should see a higher hashrate with the new build. You can thank NoodleDoodle. ”
(https://bitcointalk.org/index.php?topic=583449.msg6481202#msg6481202)
-446: “Even faster Windows binaries have just been uploaded. Install for more hash power! Once again, it was NoodleDoodle.”
(https://bitcointalk.org/index.php?topic=583449.msg6483680#msg6483680)
-448: “that almost doubled my hashrate again! GREAT STUFF !!!”
(https://bitcointalk.org/index.php?topic=583449.msg6484109#msg6484109)
-461: “Noodle only started optimization today so there may be gains for your CPU in the future.”
(https://bitcointalk.org/index.php?topic=583449.msg6485247#msg6485247)
[First day of miner optimization by NoodleDoodle, it is only May 1st]
-706: “The unstoppable NoodleDoodle has optimized the Windows build again. Hashrate should more than double. Windows is now faster than Linux. :O”
(https://bitcointalk.org/index.php?topic=583449.msg6549444#msg6549444)
-753: “i here tft is no longer part of the project. so is he forking or relaunching bytecoin under new name and new parameters (merged mining with flatter emission curve.) also. what is the end consensus for the emission curve for monero. will it be adjusted."
(https://bitcointalk.org/index.php?topic=583449.msg6561345#msg6561345)
[May 5th, 2014. TFT launches FANTOMCOIN, a clone coin whose "only" feature was merged mining]
-761: (https://bitcointalk.org/index.php?topic=583449.msg6561941#msg6561941) [May 5th, 2014 – eizh on the emission curve and tail emission]
-791: “As promised, I did Russian translation of main topic.”
(https://bitcointalk.org/index.php?topic=583449.msg6565521#msg6565521)
[one of dozens of decentralized, "altruistic" collaborators helping Monero with minor tasks]
-827: image
(https://bitcointalk.org/index.php?topic=583449.msg6571652#msg6571652)
-853: (https://bitcointalk.org/index.php?topic=583449.msg6575033#msg6575033)
[some are not happy that NoodleDoodle had only released the built binaries, but not the source code]
-950: (https://bitcointalk.org/index.php?topic=583449.msg6593768#msg6593768)
[Rias, an account suspected to be related to the Bytecoin scam, dares to tag Monero as “instamine”]
-957: “It's rather bizarre that you're calling this an "instamine" scam when you're so fervently supporting BCN, which was mined 80% before entering the clearnet. Difficulty adjustments are per block, so there is no possibility of an instamine unless you don't publish your blockchain (emission is regular at the preset interval, and scales adequately with the network hash rate). What you're accusing monero of is exactly what ByteCoin did.”
https://bitcointalk.org/index.php?topic=583449.msg6594025#msg6594025
[Discussion with rias drags on for SEVERAL posts]
-1016: “There is no "dev team". There is a community of people working on various aspects of the coin.
I've been keeping the repo up to date. NoodleDoodle likes to optimise his miner. TFT started the fork and also assists when things break. othe's been working on a GUI. zone117x has been working on a pool.
It's a decentralized effort to maintain the fork, not a strawman team of leet hackers who dwell in the underbellies of the internet and conspire for instamines.”
(https://bitcointalk.org/index.php?topic=583449.msg6596828#msg6596828)
-1023: “Like I stated in IRC, I am not part of the "dev team", I never was. Just so happens I took a look at the code and changed some extremely easy to spot "errors". I then decided to release the binary because I thought MRO would benefit from it. I made this decision individually and nobody else should be culpable”
(https://bitcointalk.org/index.php?topic=583449.msg6597057#msg6597057)
[NoodleDoodle dismisses the instamine accusations]
-1029: “I decided to relaunch Monero so it will suit all your wishes that you had: flatter emission curve, open source optimized miner for everybody from the start, no MM with BCN/BMR and the name. New Monero will be ready tomorrow”
(https://bitcointalk.org/index.php?topic=583449.msg6597252#msg6597252)
[there are always people trying to capitalize on mistakes.]
-1030: "Pull request has been submitted and merged to update miner speed
It appears from the simplicity of the fix that there may have been deliberate crippling of the hashing algorithm from introduction with ByteCoin."
https://bitcointalk.org/index.php?topic=583449.msg6597460#msg6597460
[tacotime “officially” raises suspicions of a possibly deliberately crippled miner]
-1053: "I don't mind the 'relaunch' or the merge-mining fork or any other new coin at all. It's inevitable that the CryptoNote progresses like scrypt into a giant mess of coins. It's not undesirable or 'wrong'. Clones fighting out among themselves is actually beneficial for Monero. Although one of them is clearly unserious and trolling by choosing the same name.
Anyway, this sudden solidarity with BCN or TFT sure is strange when none of these accounts were around for the discussions that took place 3 weeks ago. Such vested interests with no prior indications. Hmm...? "
https://bitcointalk.org/index.php?topic=583449.msg6599013#msg6599013
[eizh points out the apparently organized FUD campaign]
-1061: "There was no takeover. The original developer (who himself did a fork of bytecoin and around a dozen lines of code changes) was non-responsive and had disappeared. The original name had been cybersquatted all over the place (since the original developer did not even register any domain name much less create a web site), making it impossible to even create a suitably named web site. A bunch of us who didn't want to see the coin die who represented a huge share of the hash power and ownership of the coin decided to adopt it. We reached out to the original developer to participate in this community effort and he still didn't respond over 24 hours, so we decided to act to save the coin from neglect and actively work toward building the coin."
(https://bitcointalk.org/index.php?topic=583449.msg6599798#msg6599798)
[smooth defends legitimacy of current “dev team” and decisions taken]
-1074: “Zerocash will be announced soon (May 18 in Oakland? but open source may not be ready then?).
Here is a synopsis of the tradeoffs compared to CyptoNote: […]"
(https://bitcointalk.org/index.php?topic=583449.msg6602891#msg6602891)
[comparison among Zerocash y Cryptonote]
-1083: "Altcoin history shows that except in the case of premine (Tenebrix), the first implementation stays the largest by a wide margin. We're repeating that here by outpacing Bytecoin (thanks to its 80% mine prior to surfacing). No other CN coin has anywhere near the hashrate or trading volume. Go check diff in Fantom for example or the lack of activity in BCN trading.
The only CN coin out there doing something valuable is HoneyPenny, and they're open source too. If HP develops something useful, MRO can incorporate it as well. Open source gives confidence. No need for any further edge."
(https://bitcointalk.org/index.php?topic=583449.msg6603452#msg6603452)
[eizh reminds everyone the “first mover” advantage is a real advantage]
-1132: "I decided to tidy up bitmonero GitHub rep tonight, so now there is all valuable things from latest BCN commits & Win32. Faster hash from quazarcoin is also there. So BMR rep is the freshest one.
I'm working on another good feature now, so stay tuned."
(https://bitcointalk.org/index.php?topic=583449.msg6619738#msg6619738)
[first TFT appearance in weeks; he somehow pretends to still be the "lead dev"]
-1139: "This is not the github or website used by Monero. This github is outdated even with these updates. Only trust binaries from the first post."
(https://bitcointalk.org/index.php?topic=583449.msg6619971#msg6619971)
[eizh tries to clarify for the community, after TFT's interference, which downloads are the official ones]
-1140: “The faster hash is from NoodleDoodle and is already submitted to the moner-project github (https://github.com/monero-project/bitmonero) and included in the binaries here.
[trying to bring TFT back on board] It would be all easier if you just work together with the other guys, whats the problem? Come to irc and talk like everyone else?
[on future monero exchangers] I got confirmation from one."
(https://bitcointalk.org/index.php?topic=583449.msg6619997#msg6619997)
[May 8th, 2014: othe announces that NoodleDoodle's optimized miner is now open source, asks TFT to collaborate, and reports that an exchange is coming]
-1146: "I'll be impressed if they [BCN/TFT shills] manage to come up with an account registered before January, but then again they could buy those.”
(https://bitcointalk.org/index.php?topic=583449.msg6620257#msg6620257)
[smooth]
-1150: “Ring signatures mean that when you sign a transaction to spend an output (coins), no one looking at the block chain can tell whether you signed it or one of the other outputs you choose to mix in with yours. With a mixing factor of 5 or 10 after several transactions there are millions of possible coins all mixed together. You get "anonymity" and mixing without having to use a third party mixer.”
(https://bitcointalk.org/index.php?topic=583449.msg6620433#msg6620433)
[smooth answering to “what are ring signatures” in layman terms]
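smooth's "millions of possible coins" remark is just exponential growth of the ambiguity set. A toy sketch (not Monero's actual cryptography), assuming each spend mixes the real output with m decoys, so an observer sees m + 1 equally plausible sources per hop:

```python
# Toy model: with mixing factor m, each hop multiplies the number of
# plausible transaction histories by (m + 1).
def ambiguity(mixin: int, hops: int) -> int:
    return (mixin + 1) ** hops

for mixin in (5, 10):
    for hops in (1, 4, 8):
        print(f"mixin={mixin}, hops={hops}: {ambiguity(mixin, hops):,} histories")
# mixin=5 after 8 hops already gives 1,679,616 possible histories
```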
-1170: "Someone (C++ skilled) did private optimized miner a few days ago, he got 74H/s for i5 haswell. He pointed that mining code was very un-optimized and he did essential improvements for yourself. So, high H/S is possible yet. Can the dev's core review code for that?"
(https://bitcointalk.org/index.php?topic=583449.msg6623136#msg6623136)
[the forums are talking about an individual or group with optimized miners - May 9th, 2014]
-1230: "Good progress on the pool reported by NOMP dev zone117x. Stay tuned, everyone.
And remember to email your favorite exchanges about adding MRO."
(https://bitcointalk.org/index.php?topic=583449.msg6640190#msg6640190)
-1258: "This is actually as confusing to us as you. At one point, thankful_for_today said he was okay with name change: https://bitcointalk.org/index.php?topic=563821.msg6368600#msg6368600
Then he disappeared for more than a week after the merge mining vote failed.”
(https://bitcointalk.org/index.php?topic=583449.msg6645981#msg6645981)
[eizh on the TFT-issue]
-1358: “Jadehorse: registered on 2014-03-06 and two pages of one line posts:
https://bitcointalk.org/index.php?action=profile;u=263597
https://bitcointalk.org/index.php?action=profile;u=263597;sa=showPosts
Trustnobody: registered on 2014-03-06 and two pages of one line posts:
https://bitcointalk.org/index.php?action=profile;u=264292
https://bitcointalk.org/index.php?action=profile;u=264292;sa=showPosts
You guys should really just stop trying. It is quite transparent what you are doing. Or if you want to do it, do it somewhere else. Everyone else: ignore them please."
(https://bitcointalk.org/index.php?topic=583449.msg6666844#msg6666844)
[FUD campaign still ongoing, smooth battles it]
-1387: "The world’s first exchange for Monero just opened! cryptonote.exchange.to"
(https://bitcointalk.org/index.php?topic=583449.msg6675902#msg6675902)
[David Latapie announces an important milestone: the first exchange is here]
-1467: "image"
(https://bitcointalk.org/index.php?topic=583449.msg6686125#msg6686125)
[it is weird, but TFT appears again, apparently as if he were in a parallel reality]
-1495: “http://monero.cc/blog/monero-price-0-002-passed/”
(https://bitcointalk.org/index.php?topic=583449.msg6691706#msg6691706)
[trading milestone reached: Monero surpassed the 0.002 BTC price for the first time]
-1513: "There is one and only one coin, formerly called Bitmonero, now called Monero. There was a community vote in favor (despite likely ballot stuffing against). All of the major stakeholders at the time agreed with the rename, including TFT.
The code base is still called bitmonero. There is no reason to rename it, though we certainly could have if we really wanted to.
TFT said he is sentimental about the Bitmonero name, which I can understand, so I don't think there is any malice or harm in him continuing to use it. He just posted the nice hash rate chart on here using the old name. Obviously he understands that they are one and the same coin."
(https://bitcointalk.org/index.php?topic=583449.msg6693615#msg6693615)
[smooth clears up, again, the relationship between TFT and BMR. Every time TFT appears it seems to confuse newbies]
-1543: "Pool software is in testing now. You can follow the progress on the pool bounty thread (see original post on this thread for link)."
(https://bitcointalk.org/index.php?topic=583449.msg6698097#msg6698097)
-1545: "[on the tail emission debate] I've been trying to raise awareness of this issue. The typical response seems to be, "when Bitcoin addresses the problem, so will we." To me this means it will never be addressed. The obvious solution is to perpetually increase the money supply, always rewarding miners with new coins.
Tacotime mentioned a hard fork proposal to never let the block reward drop below 1 coin:
Code: if (blockReward < 1){ blockReward = 1; }
I assume this is merely delaying the problem, however. I proposed a fixed annual debasement (say 2%) with a tx fee cap of like 0.001% of the current block reward (or whatever sounds reasonable). That way we still get the spam protection without worrying about fee escalation down the road."
(https://bitcointalk.org/index.php?topic=583449.msg6698879#msg6698879)
[Johnny Mnemonic wants to debate tail emission. Debate is moved to the “Monero Economy” thread]
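A quick sketch of the two quoted ideas side by side; the 1-coin floor and the 2% annual debasement come from the quote, while the supply and reward numbers are made-up stand-ins:

```python
# Sketch of the two tail-emission proposals quoted above.
def floor_reward(base_reward: float) -> float:
    # tacotime's hard-fork proposal: never let the reward drop below 1 coin
    return max(base_reward, 1.0)

def debasement_reward(supply: float, annual_rate: float = 0.02,
                      blocks_per_year: int = 525_600) -> float:
    # Johnny Mnemonic's proposal: fixed annual debasement of the supply
    # (525,600 blocks/year assumes ~1-minute blocks, as Monero had then)
    return supply * annual_rate / blocks_per_year

supply, base = 10_000_000.0, 0.8        # hypothetical late-emission state
print(floor_reward(base))               # 1.0 coin per block
print(debasement_reward(supply))        # ~0.38 coins per block at 10M supply
```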
-1603: “My GOD,the wallet is very very wierd and too complicated to operate, Why dont release a wallet-qt as Bitcoin?”
(https://bitcointalk.org/index.php?topic=583449.msg6707857#msg6707857)
[newbies have a hard time with Monero]
-1605: "because this coin is not a bitcoin clone and so there isnt a wallet-qt to just copy and release. There is a bounty for a GUI wallet and there is already an experimental windows wallet..."
(https://bitcointalk.org/index.php?topic=583449.msg6708250#msg6708250)
-1611: "I like this about Monero, but it seems it was written by cryptographers, not programmers. The damned thing doesn't even compile on Arch, and there are several bugs, like command history not working on Linux. The crypto ideas are top-notch, but the implementation is not."
(https://bitcointalk.org/index.php?topic=583449.msg6709002#msg6709002)
[wolf0, a miner developer, gradually joining the community]
-1888: "http://198.199.79.100 (aka moneropool.org) successfully submitted a block. Miners will be paid for their work once payments start working.
P.S. This is actually our second block today. The first was orphaned. :/"
(https://bitcointalk.org/index.php?topic=583449.msg6753836#msg6753836)
[May 16th: first pool block]
-1927: "Botnets aren't problem now. The main problem is a private hi-performance miner"
(https://bitcointalk.org/index.php?topic=583449.msg6759622#msg6759622)
-1927: "Evidence?"
(https://bitcointalk.org/index.php?topic=583449.msg6759661#msg6759661)
[smooth about the private optimized miner]
-1937: “[reference needed: smooth battling the weak evidence of optimized miner] Yes, I remember that. Some person on the Internet saying that some other unnamed person said he did something hardly constitutes evidence.
I'm not even doubting that optimized asm code could make a big difference. Just not sure how to know whether this is real or not. Rumors and FUD are rampant, so it is just hard to tell."
(https://bitcointalk.org/index.php?topic=583449.msg6760040#msg6760040)
[smooth does not take the "proof" seriously]
-1949: "image
One i5 and One e5 connected to local pool:
image"
(https://bitcointalk.org/index.php?topic=583449.msg6760624#msg6760624)
[proof of optimized miner]
-1953: "lazybear are you interested in a bounty to release the source code (maybe cleaned up a bit?) your optimized miner? If not, I'll probably play around with the code myself tomorrow and see if I can come up with something, or maybe Noodle Doodle will take an interest."
(https://bitcointalk.org/index.php?topic=583449.msg6760699#msg6760699)
[smooth tries to bring lazybear and his optimized miner on board]
-1957: "smooth, NoodleDoodle just said on IRC his latest optimizations are 4x faster on Windows. Untested on Linux so far but he'll push the source to the git repo soon. We'll be at 1 million network hashrate pretty soon."
(https://bitcointalk.org/index.php?topic=583449.msg6760814#msg6760814)
[eizh makes public that NoodleDoodle has more miner optimizations ready]
-1985: “Someone (not me) created a Monero block explorer and announced it yesterday in a separate thread:
https://bitcointalk.org/index.php?topic=611561.0”
(https://bitcointalk.org/index.php?topic=583449.msg6766206#msg6766206)
[May 16th, 2014: a functional block explorer]
-2018: “Noodle is doing some final tests on Windows and will begin testing on Linux. He expects hashrate should increase across all architectures. I can confirm a 5x increase on an i7 quad-core + Windows 7 64-bit.
Please be patient. These are actual changes to the program, not just a switch that gets flicked on. It needs testing.”
(https://bitcointalk.org/index.php?topic=583449.msg6770093#msg6770093)
[eizh has more info on last miner optimization]
-2023: “Monero marketcap is around $300,000 as of now”
(https://bitcointalk.org/index.php?topic=583449.msg6770365#msg6770365)
-2059: I was skeptical of this conspiracy theory at first but after thinking about the numbers and looking back at the code again, I'm starting to believe it.
These are not deep optimizations, just cleaning up the code to work as intended.
At 100 H/s, with 500k iterations, 70 cycles per L3 memory access, we're now at 3.5 GHz which is reasonably close. So the algorithm is finally memory-bound, as it was originally intended to be. But as delivered by the bytecode developers not even close.
I know this is going to sound like tooting our own horn but this is another example of the kind of dirty tricks you can expect from the 80% premine crowd and the good work being done in the name of the community by the Monero developers.
Assuming they had the reasonable, and not deoptimized, implementation of the algorithm as designed all along (which is likely), the alleged "two year history" of bytecoin was mined on 4-8 PCs. It's really one of the shadiest and sleaziest premines scams yet, though this shouldn't be surprising because in every type of scam, the scams always get sneakier and more deceptive over time (the simple ones no longer work)."
(https://bitcointalk.org/index.php?topic=583449.msg6773168#msg6773168)
[smooth blows the lid off: if the miner was so de-optimized, then BCN adoption was even lower than initially thought]
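smooth's clock arithmetic is easy to verify:

```python
# 100 H/s at 500k memory-bound iterations/hash and ~70 cycles per L3
# access implies the core runs at about 3.5 GHz, as claimed above.
required_clock = 100 * 500_000 * 70      # cycles per second
print(f"{required_clock / 1e9:.1f} GHz") # 3.5 GHz
```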
-2123: (https://bitcointalk.org/index.php?topic=583449.msg6781481#msg6781481)
[fluffypony first public post in Monero threads]
-2131: "moneropool.org is up to 2KHs, (average of 26Hs per user). But that's still only 0.3% of the reported network rate of 575Khs.
So either a large botnet is mining, or someone's sitting quietly on a much more efficient miner and raking in MRO."
(https://bitcointalk.org/index.php?topic=583449.msg6782192#msg6782192)
[with pools live, users start to notice that "average" users account for a very small % of the network hashrate; either botnets or a super-optimized miner is mining Monero]
-2137: “I figure its either:
(https://bitcointalk.org/index.php?topic=583449.msg6782852#msg6782852)
-2192: “New source (0.8.8.1) is up with optimizations in the hashing. Hashrate should go up ~4x or so, but may have CPU architecture dependence. Windows binaries are up as well for both 64-bit and 32-bit."
(https://bitcointalk.org/index.php?topic=583449.msg6788812#msg6788812)
[eizh makes the official announcement of the latest miner optimization; it is May 17th]
-2219: (https://bitcointalk.org/index.php?topic=583449.msg6792038#msg6792038)
[wolf0 has been part of the Monero community for a while, discussing topics such as botnet mining and miner optimizations. Here he spots security flaws in the just-launched pools]
-2301: "5x optimized miner released, network hashrate decreases by 10% Make your own conclusions. :|"
(https://bitcointalk.org/index.php?topic=583449.msg6806946#msg6806946)
-2323: "Monero is on Poloniex https://poloniex.com/exchange/btc_mro"
(https://bitcointalk.org/index.php?topic=583449.msg6808548#msg6808548)
-2747: "Monero is holding a $500 logo contest on 99designs.com now: https://99designs.com/logo-design/contests/monero-mro-cryptocurrency-logo-design-contest-382486"
(https://bitcointalk.org/index.php?topic=583449.msg6829109#msg6829109)
-2756: “So... ALL Pools have 50KH/s COMBINED.
Yet, network hash is 20x more. Am i the only one who thinks that some people are insta mining with prepared faster miners?”
(https://bitcointalk.org/index.php?topic=583449.msg6829977#msg6829977)
-2757: “Pools aren't stable yet. They are more inefficient than solo mining at the moment. They were just released. 10x optimizations have already been released since launch, I doubt there is much more optimization left.”
(https://bitcointalk.org/index.php?topic=583449.msg6830012#msg6830012)
-2765: “Penalty for too large block size is disastrous in the long run.
Once MRO value increases a lot, block penalties will become more critical of an issue. Pools will fix this issue by placing a limit on number and size of transactions. Transaction fees will go up, because the pools will naturally accept the most profitable transactions. It will become very expensive to send with more than 0 mixin. Anonymity benefits of ring signatures are lost, and the currency becomes unusable for normal transactions.”
(https://bitcointalk.org/index.php?topic=583449.msg6830475#msg6830475)
-2773: "The CryptoNote developers didn't want blocks getting very large without genuine need for it because it permits a malicious attack. So miners out of self-interest would deliberately restrict the size, forcing the network to operate at the edge of the penalty-free size limit but not exceed it. The maximum block size is a moving average so over time it would grow to accommodate organic volume increase and the issue goes away. This system is most broken when volume suddenly spikes."
(https://bitcointalk.org/index.php?topic=583449.msg6830710#msg6830710)
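For reference, the penalty being discussed works roughly as follows; this is a simplified sketch of the CryptoNote rule, not the exact consensus code:

```python
# Sketch of the CryptoNote block-reward penalty: blocks larger than the
# median M of recent blocks forfeit part of the reward, and blocks over
# 2*M are invalid outright.
def penalized_reward(base_reward: float, block_size: int, median: int) -> float:
    if block_size <= median:
        return base_reward                 # no penalty at or below the median
    if block_size > 2 * median:
        raise ValueError("block rejected: more than twice the median size")
    excess = block_size / median - 1       # in (0, 1]
    return base_reward * (1 - excess ** 2) # quadratic penalty

median = 300_000
for size in (300_000, 450_000, 600_000):
    print(size, penalized_reward(10.0, size, median))
# 300000 -> 10.0, 450000 -> 7.5, 600000 -> 0.0
```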
-3035: "We've contributed a massive amount to the infrastructure of the coin so far, enough to get recognition from cryptonote, including optimizing their hashing algorithm by an order of magnitude, creating open source pool software, and pushing several commits correcting issues with the coin that eventually were merged into the ByteCoin master. We also assisted some exchange operators in helping to support the coin.
To say that has no value is a bit silly... We've been working alongside the ByteCoin devs to improve both coins substantially."
(https://bitcointalk.org/index.php?topic=583449.msg6845545#msg6845545)
[tacotime defends the Monero team and community against accusations of just ripping off others' hard work and "stealing" their project]
-3044: "image"
(https://bitcointalk.org/index.php?topic=583449.msg6845986#msg6845986)
[Monero added to CoinMarketCap, May 21st, 2014]
-3059: "You have no idea how influential you have been to the success of this coin. You are a great ambassador for MRO and one of the reasons why I chose to mine MRO during the early days (and I still do, but alas no soup for about 5 days now)."
(https://bitcointalk.org/index.php?topic=583449.msg6846509#msg6846509)
[a random user thanks smooth for his constant presence and collaboration. It is not all FUD ;)]
-3068: "You are a little too caught up in the mindset of altcoin marketing wars about "unique features" and "the team" behind the latest pump and dump scam.
In fact this coin is really little more than BCN without the premine. "The team" is anyone who contributes code, which includes anyone contributing code to the BCN repository, because that will get merged as well (and vice-versa).
Focus on the technology (by all accounts amazing) and the fact that it was launched in a clean way without 80% of the total world supply of the coin getting hidden away "somewhere." That is the unique proposition here. There also happens to be a very good team behind the coin, but anyone trying too hard to market on the basis of some "special" features, team, or developer is selling you something. Hold on to your wallet."
(https://bitcointalk.org/index.php?topic=583449.msg6846638#msg6846638)
[An answer to those trolls saying Monero has no innovation/unique feature]
-3070: "Personally I found it refreshing that Monero took off WITHOUT a logo or a gui wallet, it means the team wasn't hyping a slick marketing package and is concentrating on the coin/note itself."
(https://bitcointalk.org/index.php?topic=583449.msg6846676#msg6846676)
-3119: “image
[included for the lulz]
-3101: "[…]The main developers are tacotime, smooth, NoodleDoodle. Some needs are being contracted out, including zone117x, LucasJones, and archit for the pool, another person for a Qt GUI, and another person independently looking at the code for bugs."
(https://bitcointalk.org/index.php?topic=583449.msg6848006#msg6848006)
[the initial "core team" so far, eizh post]
-3123: (https://bitcointalk.org/index.php?topic=583449.msg6850085#msg6850085)
[fluffy steps in with an interesting, dense post. Don't skip it; it is worth reading]
-3127: (https://bitcointalk.org/index.php?topic=583449.msg6850526#msg6850526)
[fluffy again, also worth reading, so follow the link, don't be lazy]
-3194: "Hi guys - thanks to lots of hard work we have added AES-NI support to the slow_hash function. If you're using an AES-NI processor you should see a speed-up of about 30%.”
(https://bitcointalk.org/index.php?topic=583449.msg6857197#msg6857197)
[fluffypony is now pretty active in the XMR topic and announces a new optimization to the crippled miner]
-3202: "Whether using pools or not, this coin has a lot of orphaned blocks. When the original fork was done, several of us advised against 60 second blocks, but the warnings were not heeded.
I'm hopeful we can eventually make a change to more sane 2- or 2.5-minute blocks which should drastically reduce orphans, but that will require a hard fork, so not that easy."
(https://bitcointalk.org/index.php?topic=583449.msg6857796#msg6857796)
[smooth takes the opportunity to recall the need for a bigger block target]
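smooth's complaint about 60-second blocks can be quantified with the usual back-of-envelope orphan model: if a block takes about τ seconds to propagate and blocks arrive every T seconds on average, roughly 1 - e^(-τ/T) of blocks find a competitor. The propagation delay below is an assumed, illustrative figure:

```python
import math

# Rough orphan-rate model: probability another block is found while
# yours is still propagating through the network.
def orphan_rate(tau_seconds: float, block_interval_seconds: float) -> float:
    return 1 - math.exp(-tau_seconds / block_interval_seconds)

tau = 6.0                            # assumed average propagation delay (s)
for T in (60, 120, 600):
    print(f"T={T:>3}s: ~{orphan_rate(tau, T):.1%} orphans")
# 60s blocks orphan roughly twice as often as 120s blocks
```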
-3227: “Okay, optimized miner seems to be working: https://bitcointalk.org/index.php?topic=619373”
[wolf0 makes public his open source optimized miner]
-3235: "Smooth, I agree block time needs to go back to 2 minutes or higher. I think this and other changes discussed (https://bitcointalk.org/index.php?topic=597878.msg6701490#msg6701490) should be rolled into a single hard fork and bundled with a beautiful GUI wallet and mining tools."
(https://bitcointalk.org/index.php?topic=583449.msg6861193#msg6861193)
[tail emission, block target, and block size are discussed over the next few messages by smooth, Johnny, and others. If you want to know more about their opinions and reasoning, go read them]
-3268: (https://bitcointalk.org/index.php?topic=583449.msg6862693#msg6862693)
[fluffy dares another user to bet 5 BTC that in one year Monero will be above Dash in market cap. A bet he would have lost, as you can see here https://coinmarketcap.com/historical/20150524/ even excluding the 2M "instamined" coins]
-3283: "Most of the previous "CPU only" coins are really scams and the developers already have GPU miner or know how to write one. There are a very few exceptions, almost certainly including this one.
I don't expect a really dominant GPU miner any time soon, maybe ever. GPUs are just computers though, so it is certainly possible to mine this on a GPU, and there probably will be a some GPU miner, but won't be so much faster as to put small scale CPU miners out of business (probably -- absent some unknown algorithmic flaw).
Everyone focuses on botnets because it has been so long since regular users were able to effectively mine a coin (due to every coin rapidly going high end GPU and ASIC) that the idea that "users" could vastly outnumber "miners" (botnet or otherwise) isn't even on the radar.
The vision here is a wallet that asks you when you want to install: "Do you want to devote some of you CPU power to help secure the network. You will be eligible to receive free coins as a reward (recommended) [check box]." Get millions of users doing that and it will drive down the value of mining to where neither botnets nor professional/industrial miners will bother, and Satoshi's original vision of a true p2p currency will be realized.
That's what cryptonote wants to accomplish with this whole "egalitarian mining" concept. Whether it succeeds I don't know but we should give it a chance. Those cryptonote guys seem pretty smart. They've probably thought this through better than any of us have."
(https://bitcointalk.org/index.php?topic=583449.msg6863720#msg6863720)
[smooth's vision of a true p2p currency]
-3318: "I have a screen shot that was PMed to me by someone who paid a lot of money for a lot of servers to mine this coin. He won't be outed by me ever but he does in fact exist. Truth."
(https://bitcointalk.org/index.php?topic=583449.msg6865061#msg6865061)
[smooth implies it is not botnets but an individual, or a group of individuals, renting huge cloud instances]
-3442: "I'm happy to report we've successfully cracked Darkcoin's network with our new quantum computers that just arrived from BFL, a mere two weeks after we ordered them."
[fluffy-troll]
-3481: “Their slogan is, "Orphaned Blocks, Bloated Blockchain, that's how we do""
(https://bitcointalk.org/index.php?topic=583449.msg6878244#msg6878244)
[Major FUD troll in the topic. One of the hardest I’ve ever seen]
-3571: "Tacotime wanted the thread name and OP to use the word privacy instead of anonymity, but I made the change for marketing reasons. Other coins do use the word anonymous improperly, so we too have to play the marketing game. Most users will not bother looking at details to see which actually has more privacy; they'll assume anonymity > privacy. In a world with finite population, there's no such thing as anonymity. You're always "1 of N" possible participants.
Zero knowledge gives N -> everyone using the currency, ring signatures give N -> your choice, and CoinJoin gives N -> people who happen to be spending around the same amount of money as you at around the same time. This is actually the critical weakness of CoinJoin: the anonymity set is small and it's fairly susceptible to blockchain analysis. Its main advantage is that you can stick to Bitcoin without hard forking.
Another calculated marketing decision: I made most of the OP about ring signatures. In reality, stealth addressing (i.e. one-time public keys) already provides you with 90% of the privacy you need. Ring signatures are more of a trump card that cannot be broken. But Bitcoin already has manual stealth addressing so the distinguishing technological factor in CryptoNote is the use of ring signatures.
This is why I think having a coin based on CoinJoin is silly: Bitcoin already has some privacy if you care enough. A separate currency needs to go way beyond mediocre privacy improvements and provide true indistinguishably. This is true thanks to ring signatures: you can never break the 1/N probability of guessing correctly. There's no additional circumstantial evidence like with CoinJoin (save for IP addresses, but that's a problem independent of cryptocurrencies)."
(https://bitcointalk.org/index.php?topic=583449.msg6883525#msg6883525)
[anonymity discussions, especially comparing Monero with Darkcoin and its CoinJoin-based solution, keep going on]
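For the curious, the "stealth addressing" eizh mentions is a Diffie-Hellman trick: the sender derives a fresh one-time key per payment that only the recipient can recognize. A toy sketch (NOT real cryptography; modular exponentiation stands in for elliptic-curve points):

```python
import hashlib

p, g = 2**61 - 1, 5                      # toy group: integers modulo a prime

def H(x: int) -> int:                    # hash-to-scalar stand-in
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % p

a, b = 123456789, 987654321              # recipient's view and spend secrets
A, B = pow(g, a, p), pow(g, b, p)        # the published address is (A, B)

# Sender: fresh randomness r per payment yields an unlinkable one-time key P.
r = 55555
R = pow(g, r, p)                         # published alongside the transaction
P = pow(g, H(pow(A, r, p)), p) * B % p   # one-time destination key

# Recipient scans the chain with the view key a and recovers the same key.
P_scan = pow(g, H(pow(R, a, p)), p) * B % p
assert P == P_scan                       # the recipient recognizes the output
```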
-3593: "Transaction fees should be a fixed percentage of the block reward, or at the very least not be controllable by the payer. If payers can optionally pay more then it opens the door for miner discrimination and tx fee bidding wars."
(https://bitcointalk.org/index.php?topic=583449.msg6886770#msg6886770)
[Johnny Mnemonic is a firm defender of fixed fees and tail emission: he sees the "fee market" as a big danger to the usability of cryptocurrencies]
-3986: (https://bitcointalk.org/index.php?topic=583449.msg6930412#msg6930412)
[partnership with i2p]
-4373: “Way, way faster version of cpuminer: https://bitcointalk.org/index.php?topic=619373”
(https://bitcointalk.org/index.php?topic=583449.msg6993812#msg6993812)
[the super-optimized miner is finally leaked to the public. The hashrate is now 100 times higher than with the original crippled miner. The next edge for "cloud farmers" is GPU mining]
-4877: “1. We have a logo! If you use Monero in any of your projects, you can grab a branding pack here. You can also see it in all its glory right here:
logo […] 4. In order to maintain ISO 4217 compliance, we are changing our ticker symbol from MRO to XMR effective immediately."
(https://bitcointalk.org/index.php?topic=583449.msg7098497#msg7098497)
[Jun 2nd 2014]
-5079: “First GPU miner: https://bitcointalk.org/index.php?topic=638915.0”
(https://bitcointalk.org/index.php?topic=583449.msg7130160#msg7130160)
[4th June: Claymore has developed the first CryptoNight open source and publicly available GPU miner]
-5454: "New update to my miner - up to 25% hash increase. Comment and tell me how much of an increase you got from it: https://bitcointalk.org/index.php?topic=632724"
(https://bitcointalk.org/index.php?topic=583449.msg7198061#msg7198061)
[miner optimization is an endless task]
-5464: "I have posted a proposal for fixed subsidy:
https://bitcointalk.org/index.php?topic=597878.msg7202538#msg7202538"
(https://bitcointalk.org/index.php?topic=583449.msg7202776#msg7202776)
[Nice charts and discussion proposed by tacotime, worth reading it]
-5658: "- New seed nodes added. - Electrum-style deterministic wallets have been added to help in the recovery of your wallet should you ever need to. It is enabled by default."
(https://bitcointalk.org/index.php?topic=583449.msg7234475#msg7234475)
[Now you can recover your wallet with a 24 word seed]
-5726: (https://bitcointalk.org/index.php?topic=583449.msg7240623#msg7240623)
[the Monero version of the Bitcoin pizza: a 2500 XMR painting sale (worth ~$20k today)]
-6905: (https://bitcointalk.org/index.php?topic=583449.msg7386715#msg7386715)
[Monero Missives: CryptoNote peer review starts (whitepaper reviewed)]
-7328: (https://bitcointalk.org/index.php?topic=583449.msg7438333#msg7438333)
[android monero widget built]
This is a dense digest of the first several thousand messages in the definitive Monero thread.
A lot of things happened in those stressful days, and most of them are recorded here. They can be summarized as follows:
  • 28th April: Othe and zone117x assume the GUI wallet and CN pools tasks.
  • 30th April: First NoodleDoodle's miner optimization.
  • 11th May: First Monero exchange
  • 13th May: Open source pool code is ready.
  • 16th May: First pool mined block.
  • 19th May: Monero listed on Poloniex
  • 20th May: Monero tops 1,100 BTC in 24h trading volume on Poloniex.
  • 21st May: New official miner optimization, 4x speed (accumulated optimization 12x-16x). wolf0's open source CPU miner released.
  • 25th May: partnership with i2p
  • 28th May: The legendary super-optimized miner is leaked, now running at 90x the original speed. The "cloud farmers'" edge in CPU mining is over.
  • 2nd June: Monero at last has a logo. Ticker symbol changes to the definitive XMR (former MRO)
  • 4th June: Claymore's open source GPU miner.
  • 10th June: Monero's "10,000 bitcoin pizza" (a 2500 XMR painting). Deterministic seed-based wallets (recover your wallet with a 24-word seed)
  • March 2015 – tail emission added to code
  • March 2016 – Monero hard forks to 2-minute blocks and doubles the block reward
There are basically two things here that can be used to attack Monero:
  • Crippled miner: it gave an unfair advantage to those brave enough to risk money and time to optimize the miner and mine Monero.
  • Fast emission curve: not the Bitcoin-like curve that was initially advertised and widely accepted as suitable.
Though we have to say two things in support of the current Monero community and devs:
  • The crippled miner was coded by either Bytecoin or CryptoNote, and was fully fixed within a month by the Monero community
  • The fast emission curve was a TFT miscalculation: he forgot that by halving the block target he was unintentionally doubling the emission rate (see the sketch below).
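The arithmetic behind that miscalculation, as a sketch with made-up numbers:

```python
# Coins emitted per day = reward per block * blocks per day, so halving
# the block target without halving the reward doubles the emission rate.
def daily_emission(reward_per_block: float, block_target_seconds: int) -> float:
    return reward_per_block * 86_400 / block_target_seconds

reward = 17.59                           # illustrative reward, not historical
print(daily_emission(reward, 120))       # intended: 2-minute target
print(daily_emission(reward, 60))        # actual: 1-minute target, doubled
```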
submitted by el_hispano to Monero [link] [comments]
