The idea of a currency with no central controlling entity is attractive to a lot of people. However, trusting a decentralised entity still means trusting the way it operates its business: how it audits its code, and how it writes bug-free software. The one thing I know about software is that it is never bug-free; the code that comes closest, with only a small number of bugs, is code that is widely deployed, well tested, and reviewed in detail.
Given how much of the financial machinery around cryptocurrencies rests on software, the latest issue I saw with a cryptocurrency smart wallet made me wonder about their testing. How many bugs are they looking for, both in their standard operating environment and from a security perspective? What testing and review approach are they adopting?
First, let’s benchmark against a project I work on almost daily that gets c.91% code coverage. ygot has 22,380 LOC (excluding demos and generated code), of which 13,928 are test code. In other words, to get 91% coverage of the real functions in ygot, 62% of the codebase needs to be test code. This testing isn’t specifically looking for security issues, just edge cases and normal operation of the code: checking functionality rather than security. For projects that use ygot, there is further integration test code, and then more testing on top of that (i.e., these unit tests and end-to-end code generation tests are not our entire test surface for an application using ygot structures).
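The test-LOC ratio above can be reproduced mechanically. Here is a minimal sketch (in Python, not ygot's actual tooling) that counts lines in Go `_test.go` files against all `.go` files under a directory; the file-name patterns are assumptions, and a real measurement would also exclude demos and generated code, as the figures above do:

```python
import os

def loc_ratio(root):
    """Return (test LOC, total LOC, test fraction) for .go files under root.

    A rough proxy for "what fraction of the codebase is test code" --
    treats any *_test.go file as test code, everything else as real code.
    """
    test_loc = code_loc = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".go"):
                continue
            with open(os.path.join(dirpath, name)) as f:
                lines = sum(1 for _ in f)
            if name.endswith("_test.go"):
                test_loc += lines
            else:
                code_loc += lines
    total = test_loc + code_loc
    return test_loc, total, (test_loc / total if total else 0.0)
```

Running something like this over a checkout gives the test-code percentage directly; line counts will differ slightly from tools like `cloc`, which skip blank lines and comments.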
Looking at the repo that Tether links in their post above, it had 111,376 LOC at the time I looked. The tests appear to be confined to the test directory, and make up 13,430 LOC. That means roughly 12% of their code is test code.
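As a quick sanity check on the two ratios, using the LOC figures quoted above:

```python
# LOC figures as quoted in the text above.
ygot_test_loc, ygot_total_loc = 13_928, 22_380
tether_test_loc, tether_total_loc = 13_430, 111_376

ygot_pct = 100 * ygot_test_loc / ygot_total_loc        # ~62.2
tether_pct = 100 * tether_test_loc / tether_total_loc  # ~12.1

print(f"ygot:   {ygot_pct:.0f}% test code")    # ygot:   62% test code
print(f"tether: {tether_pct:.0f}% test code")  # tether: 12% test code
```

So the gap is roughly a factor of five in the proportion of the codebase devoted to tests.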
Now, since the two projects do nothing close to the same thing, a direct comparison is probably unfair. Still, I would expect more testing, and testing that pervasively covers the entire code surface. Given that arbitrary code can be embedded in a wallet (i.e., a smart contract), this is not a trivial task: one needs to understand how that arbitrary, user-supplied code interacts with the overall platform. And since the Tether application is open source, anyone malicious can inspect the code for vulnerabilities, which one would think is a further incentive to test well.
A single central entity might not be trustworthy, but it is likely to be called upon to be reliable, and can even be regulated. Distributed control of a system means you really must trust each of those entities not to have flaws that devalue your investment. Whether you do is your choice, but I’m certainly skeptical about handing money over to decentralised entities that don’t seem to take a robust approach to software testing and quality.