The closer AI is to human intelligence, the more non-human defense systems are needed.
Author: 0xResearcher
Manus achieved state-of-the-art (SOTA) results on the GAIA benchmark, outperforming OpenAI's large models of the same class. In other words, it can independently complete complex tasks such as cross-border business negotiations: breaking down contract terms, predicting the counterparty's strategy, generating proposals, and even coordinating legal and financial teams. Compared with traditional systems, Manus's advantages lie in dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning. It can break a large task into hundreds of executable subtasks, process multiple data types simultaneously, and use reinforcement learning to continuously improve its decision-making efficiency and reduce error rates.
Even as we marvel at the pace of the technology, Manus has reignited a split in the community over the evolutionary path of AI: will the future be dominated by a single AGI, or led by collaborating multi-agent systems (MAS)?
This starts with Manus’s design philosophy, which implies two possibilities:
One is the AGI path: keep raising the intelligence of a single agent until it approaches comprehensive human decision-making capability.
The other is the MAS path: Manus acts as a super-coordinator, directing thousands of vertical-domain agents working in concert.
On the surface this is a debate about two paths, but underneath it is the fundamental tension in AI development: how should efficiency and security be balanced? The closer a single intelligence gets to AGI, the higher the risk of opaque, black-box decisions; multi-agent collaboration can spread that risk, but communication delays may cause it to miss critical decision windows.
The evolution of Manus has quietly amplified the inherent risks of AI development. Consider data-privacy black holes: in medical scenarios, Manus needs real-time access to patients' genomic data, and during financial negotiations it may touch undisclosed financial information. Or the algorithmic-bias trap: Manus has given below-average salary recommendations to candidates of specific ethnic groups, and in legal contract review its misjudgment rate on emerging-industry clauses approaches one half. Or adversarial-attack vulnerabilities: hackers implanting specific audio frequencies can make Manus misjudge the counterparty's bid range during a negotiation.
We have to face an uncomfortable pain point of AI systems: the smarter the system, the wider its attack surface.
Security, however, is a term constantly invoked in Web3, and a variety of cryptographic approaches have emerged under the framework of Vitalik Buterin's "impossible triangle" (a blockchain network cannot simultaneously achieve security, decentralization, and scalability):
- Zero-Trust Security Model: the core idea is "never trust, always verify." No device is trusted by default, whether it sits inside the internal network or not; every access request must pass strict authentication and authorization to keep the system secure.
- Decentralized Identity (DID): DID is a set of identifier standards that let entities be identified in a verifiable and persistent way without a centralized registry. It enables a new, decentralized model of digital identity, often discussed alongside self-sovereign identity, and is an important building block of Web3.
- Fully Homomorphic Encryption (FHE): an advanced encryption technique that allows arbitrary computation on encrypted data without ever decrypting it. A third party can operate directly on ciphertext, and decrypting the result matches the result of performing the same operation on the plaintext. This property matters wherever computation must happen without exposing the raw data, such as cloud computing and data outsourcing.
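To make "computing on ciphertext" concrete, here is a minimal sketch using a toy Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. (Full FHE schemes such as BFV or CKKS additionally support multiplication on ciphertexts.) The primes below are deliberately tiny and the code is for illustration only, not secure use.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Demo-sized keys, NOT secure; illustrative sketch only.
import math
import random

def keygen(p=1789, q=1861):          # small demo primes (insecure)
    n = p * q
    lam = math.lcm(p - 1, q - 1)     # Carmichael's lambda for n = p*q
    g = n + 1                        # standard simplified choice of g
    mu = pow(lam, -1, n)             # with g = n+1, mu = lam^-1 mod n
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    while True:                      # pick random r coprime to n
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    L = (pow(c, lam, n * n) - 1) // n   # L(x) = (x - 1) / n
    return (L * mu) % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 42), encrypt(pk, 58)
c_sum = (c1 * c2) % (pk[0] ** 2)     # ciphertext product = plaintext sum
print(decrypt(pk, sk, c_sum))        # -> 100
```

The party holding `c1` and `c2` never sees 42 or 58, yet produces a valid encryption of their sum, which is exactly the property that lets an AI system analyze data it cannot read.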
The zero-trust security model and DID have had a fair number of projects attack the problem across several bull markets; some succeeded, others were swallowed by the crypto wave. The youngest of the three, Fully Homomorphic Encryption (FHE), is the heavyweight candidate for solving the security problems of the AI era.
How does FHE solve them?
First, at the data level. All user input (including biometrics and voice intonation) is processed in encrypted form; even Manus itself cannot decrypt the raw data. In a medical-diagnosis scenario, for example, a patient's genomic data is analyzed as ciphertext end to end, so biological information is never exposed.
Second, at the algorithmic level. "Encrypted model training" via FHE means that even the developers cannot peer into the AI's decision path.
Third, at the collaboration level. Communication among multiple agents uses threshold encryption, so the breach of a single node does not leak global data. Even in supply-chain attack-and-defense drills, an attacker who infiltrates several agents still cannot assemble a complete business view.
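The "breach of a single node leaks nothing" property rests on secret sharing, the building block underneath threshold encryption. Below is a minimal sketch of Shamir's (t, n) scheme over a prime field: a key is split into n shares so that any t of them reconstruct it, while t-1 reveal nothing. All values are demo parameters, not production code.

```python
# Toy Shamir (t, n) secret sharing: the primitive behind threshold
# encryption. Demo values only; NOT hardened for production use.
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus

def split(secret, t, n):
    # random polynomial of degree t-1 with constant term = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, t=3, n=5)   # 5 agents; any 3 can recover the key
print(reconstruct(shares[:3]))        # -> 123456789
print(reconstruct(shares[1:4]))       # -> 123456789
```

An attacker who compromises one or two of the five agents holds points on a random degree-2 polynomial and learns nothing about the secret at x = 0, which is exactly why a single breached node cannot expose the global view.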
Because of the technical barrier, Web3 security may not touch most users directly, but it affects them indirectly in countless ways. In this dark forest, if you do not do everything you can to arm yourself, you will never escape your status as a "leek" (retail investor waiting to be harvested).
- uPort launched on the Ethereum mainnet in 2017, making it perhaps the earliest decentralized identity (DID) project to reach mainnet.
- For the zero-trust security model, NKN released its mainnet in 2019.
- Mind Network was the first FHE project to go live on mainnet, and took the lead in partnering with ZAMA, Google, DeepSeek, and others.
uPort and NKN are projects this author had never even heard of; it seems security projects really do get little attention from speculators. Whether Mind Network can escape that curse and become a leader in the security field remains to be seen.
The future has already arrived. The closer AI gets to human intelligence, the more we need defense systems that are not human. The value of FHE lies not only in solving today's problems but in paving the way for the era of strong AI. On the steep road to AGI, FHE is not an option; it is a necessity for survival.