I’ve previously written about Kodachi. The author is now working on a new version, and it doesn’t look any better. Let’s take a look, for instance, at the proposed authentication protocol.
- It’s unclear where it will be used.
- It’s unclear what it protects.
- It’s unclear whom it authenticates.
- Authentication Flow:
- When a client attempts to authenticate, it contacts the server.
- The server generates a cryptographically secure 256-bit (32-byte) random value using `random_bytes()`.
- This value is encoded as a 64-character hexadecimal string for transmission.
- A unique CSRF token is attached with the challenge to prevent replay attacks.
- Client-Side Processing:
- The client reverses the string and applies a SHA-256 hash to solve the challenge.
- Server-Side Verification:
- The server validates the solution and, upon success, issues a session token.
- Security Benefit:
- No passwords are ever transmitted or stored, significantly reducing risk.
Good luck making sense of this. On top of that, there are a lot of unanswered questions:
- How does the client authenticate the server?
- What is the purpose of the random 32-byte nonce?
- Why is a separate CSRF token needed in addition to the nonce?
- How does applying a hash to a nonce solve a challenge? What challenge?
- What solution does the server validate?
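To make the problem concrete, here is a minimal sketch of what the described client-side step seems to amount to, assuming “reverses the string” means reversing the 64-character hex challenge (the document doesn’t say). The names here are mine, not Kodachi’s code:

```python
import hashlib

def solve_challenge(challenge_hex: str) -> str:
    """What the design document appears to describe: reverse the hex
    string and hash it with SHA-256. No key, password, or other secret
    is involved anywhere."""
    reversed_challenge = challenge_hex[::-1]
    return hashlib.sha256(reversed_challenge.encode("ascii")).hexdigest()

# Anyone who can see the challenge (an eavesdropper, a proxy, or the
# server itself) can compute the same value.
challenge = "ab" * 32  # stand-in for the server's 64-character hex nonce
print(solve_challenge(challenge))
```

The only thing the server can do with this is recompute the exact same function, so a correct response proves nothing beyond the ability to run SHA-256.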
In short, I don’t see any authentication happening as part of this protocol, and I don’t see how it can be secure either. And why avoid transmitting passwords in the first place? TLS solved that problem, and if the server can’t be trusted to handle a password, it can’t be trusted to handle other secrets either.
And why re-invent the wheel? There are multiple authentication protocols that prove the client knows a secret without revealing it to the server; TLS and SSH authentication are two such mechanisms. Schneier’s law is ever relevant:
Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break. It’s not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis. And the only way to prove that is to subject the algorithm to years of analysis by the best cryptographers around.
The author of Kodachi has no credentials to go by: no published work on security, no known education on the topic, and no known prior work on authentication. The chances of the scheme being broken are overwhelming. All this follows my prior criticism; there is simply no reason to trust that Kodachi is secure. From what I’ve seen, the author has no training in either software development or security. The way git is used is another clear indication of this lack of experience.
Note that I’m not claiming that I can do better. I probably can’t, but then I’m not attempting to. I’m aware of how difficult it is, and I abstain from trying.
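For contrast, here is a minimal sketch of an ordinary challenge-response in its shared-secret (HMAC) form. TLS and SSH client authentication use public-key signatures rather than a shared secret, but the principle is the same: the response depends on a secret that is never transmitted, so only a party holding that secret can produce it. This is an illustration, not a proposal:

```python
import hashlib
import hmac
import os
import secrets

# Both parties hold the same secret, provisioned out of band.
# (TLS and SSH use key pairs instead, so the server never holds the
# client's private key at all.)
shared_secret = os.urandom(32)

# Server: generate a fresh random challenge for this login attempt.
challenge = secrets.token_bytes(32)

# Client: prove knowledge of the secret without ever sending it.
response = hmac.new(shared_secret, challenge, hashlib.sha256).digest()

# Server: recompute the expected response and compare in constant time.
expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)
```

Here the random nonce actually earns its keep: it makes every response unique, so a captured response can’t be replayed. In the Kodachi scheme the nonce changes nothing, because the response can be computed by anyone.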
But it doesn’t end there. There are other gems in the design document:
The approach to security extends beyond authentication, incorporating continuous monitoring:
- Network Monitoring:
- Tools like iftop and nethogs are used to detect unusual traffic patterns.
- Process Monitoring:
- Applications such as btop and htop help identify suspicious resource usage.
- Port & Bandwidth Tracking:
- Port monitoring and vnstat are employed to reveal unexpected connections and anomalous data transfers.
- Integrity of Running Processes:
- Signature verification ensures only authorized code is executed.
- Real-Time Alerts:
- Predefined threat patterns trigger immediate security alerts.
- How can tools such as `iftop` and `nethogs` detect unusual traffic patterns? They’re made for showing overall network bandwidth use or the biggest network consumers. A crucial leak may be only hundreds of bytes per second, and won’t show up in such tools; see the rough arithmetic after this list. A real IDS would be better. Again, proposing such tools shows a lack of understanding.
- `btop` and `htop` can help, but using them requires intimate knowledge of what’s normal. Ordinary users will probably not be in that position.
- What mechanism is used to define unexpected connections and anomalous transfers?
- How are signatures verified? What mechanism stops unsigned code?
- What patterns?
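A rough back-of-envelope, with made-up but plausible numbers, for why bandwidth-oriented tools won’t surface a small leak:

```python
# Hypothetical numbers, purely for illustration.
leak_rate = 300            # bytes/second of exfiltrated data
background = 500 * 1024    # bytes/second of ordinary traffic

fraction = leak_rate / (leak_rate + background)
print(f"leak is {fraction:.2%} of total traffic")  # about 0.06%
```

A share that small disappears into the normal jitter of the totals iftop or vnstat display. Spotting it requires per-destination and per-process inspection against a known baseline, which is exactly the job of an IDS.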
Furthermore, the current version, 8.27, is based on Ubuntu 18.04. It was released in February 2023. Ubuntu 18.04 reached end of life in May 2023. Thus, Kodachi is recommending that people use software that has not been maintained for two years and is no longer actively tracked. There are vulnerabilities in 18.04, but no one tracks or patches them anymore. For a security-focused product, leaving software unpatched for over two years is a deadly sin, and it illustrates the problems of a single developer taking on something as complex as releasing a Linux distro.
Distros such as Debian and Ubuntu have teams of multiple people doing security patching and reviews. More importantly, they have infrastructure for handling it, such as automated CI/CD systems that build and distribute new packages once patches are approved. That infrastructure is a force multiplier; the developers no longer need to spend time building things manually.
In short, the responsible thing would be to stop offering an insecure product.
I’ve tried asking on the Discord server used by the project, but so far the only reply has been ad hominem attacks.