DFINITY (DFN) Frequently Asked Questions



Frequently Asked Questions




What is DFINITY?

DFINITY is a public network of client computers providing a “decentralized world compute cloud” where software can be installed and run with all the usual benefits expected of “smart contract” systems hosted on a traditional blockchain. The underlying technology is also designed to support highly resilient tamperproof private clouds that provide the added benefit that hosted software can call into smart contracts on the public cloud.

DFINITY is an Ethereum-family technology and is fully compatible with the public Ethereum network – if you can run a Dapp on Ethereum, you can run it on DFINITY too. There are several fundamental differences between the networks, however, and they are really sister systems offering different things. DFINITY introduces new crypto protocols and techniques that aim to deliver extreme performance, unlimited scalability, interoperability and other benefits. Another difference is that whereas in Ethereum “The Code is Law”, DFINITY introduces governance by a decentralized intelligence called the Blockchain Nervous System. These differences involve tradeoffs, and DFINITY is best understood as an exciting new extension of the Ethereum ecosystem that will make it much, much stronger.




When will DFINITY be live?

Technology components used by the DFINITY project are being released into the public domain for those interested in decentralized technology (for example, software related to a crucial technique known as Threshold Relay is already in the public domain). A beta network created using the “Copper Release” client software is expected towards the end of Q1 2018, and the expectation is that the Copper network will launch at the end of Q2 2018. A supporting foundation, DFINITY Stiftung, has been created in Zug and will assist with the work.



Is DFINITY an Ethereum competitor?

DFINITY is conceived as an extension of the Ethereum ecosystem – a sister world computer network that prioritizes performance and scalability, and where smart contracts are subject to a decentralized intelligence, which is very different from “The Code is Law”. This will bring into the ecosystem people with different needs, needs that performance and decentralized governance by a distributed AI can meet. Of course, the features we provide involve some design tradeoffs. The ecosystem will be broader and more attractive because users can choose whatever suits them better. All our engineers and researchers care deeply about Ethereum and open source. We will maintain the maximum possible level of compatibility, and Stiftung DFINITY will contribute funding and effort to Ethereum projects.



How does DFINITY strengthen Ethereum?

This is not a zero-sum game. Right now numerous decentralized platforms are vying for dominance. The Ethereum ecosystem can win by eschewing monoculture. For example, during the 1990s many different hardware platforms vied for dominance, including the PowerPC, SPARC and 8086 family architectures. In the end 8086 won, largely because its ecosystem was more diverse and provided more options. Dell, HP and many other operations each became much bigger than the dominant or monopoly operations on the other platforms, which eventually disappeared or switched to 8086. This is the vision we have for the EVM (“Ethereum Virtual Machine”) and the systems that run on it. We believe that DFINITY will drive the value of the Ethereum network upwards.



Why introduce governance by a distributed AI?

DFINITY’s Blockchain Nervous System (or “BNS”) solves a number of critical problems for certain people, and it will also allow us to accelerate development far beyond what the limitations of current architectures allow. With respect to business, many organizations cannot easily move significant systems and assets onto the decentralized cloud when, if their systems deadlock or they are hacked, the “Code is Law” approach prevents them finding a solution. In many impactful potential consumer applications, it is arguably also unfair to hold users responsible for flaws in smart contract code they cannot read themselves. The BNS can address and often resolve such situations by executing arbitrary privileged code. Generally speaking, the BNS will tend to make decisions that maximize the value of “dfinities”, and we expect this will result in it maintaining an initial “genesis” constitution declaring that systems whose primary purpose is vice or violence should be frozen, since this will make its appeal broadest. There is no concept of a hard fork – traditional client software such as geth or parity (the two main Ethereum clients) is wrapped in a proxy that is BNS aware. It can continuously upgrade the inner client without interrupting dependent applications and users.

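The proxy mechanism described above can be sketched in a few lines. This is a toy illustration, not DFINITY code: the names (`BNSProxy`, `bns_upgrade`, the `EthClient` stand-in) are hypothetical, and a real deployment would wrap a full client such as geth or parity.

```python
class EthClient:
    """Stand-in for a wrapped inner client (e.g. geth or parity)."""
    def __init__(self, version):
        self.version = version

    def handle(self, request):
        return f"{self.version}:{request}"


class BNSProxy:
    """Minimal sketch of a BNS-aware proxy: callers hold a reference to
    the proxy, so the inner client can be swapped without interruption."""
    def __init__(self, client):
        self._client = client

    def handle(self, request):
        # Delegate every request to whatever inner client is current.
        return self._client.handle(request)

    def bns_upgrade(self, new_client):
        # In DFINITY the BNS would authorize this swap; here it is a plain call.
        self._client = new_client
```

Because callers only ever hold a reference to the proxy, swapping the inner client is invisible to them, which is the property that removes the need for hard forks.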


Are you serious about hosting decentralized versions of massive online services like Gmail?

Yes. DFINITY protocol research began with the assumption that the network must contain a million or more mining computers, and that together these will be required to provide a massive virtual compute capacity (i.e. scale out). Research objectives also include considerations regarding how the network can meet different kinds of computational requirements. For example, Web search is in many ways very suited to vast decentralized networks, but the need for results to be returned quickly requires that engineers be given flexibility in how they schedule validated computations. These kinds of considerations are baked into DFINITY’s thinking.



Where does the inspiration for DFINITY come from?

Unsurprisingly, DFINITY can be traced back to cypherpunk and decentralization thinking, but there are some twists. Back in 1999 Dominic Williams was using Wei Dai’s crypto++ library and came across his bMoney proposal. The idea struck him as important, although he was consumed with working on a Dot Com era technology and had no time to follow up. He independently developed deep interests in distributed computing and scalability – he launched an MMO game in 2010 that grew to 3MM users and relied heavily on technology he created. In 2013 Dominic abandoned everything he was doing to concentrate on decentralization technology, working through 2014 on theory. DFINITY came out of an earlier project called Pebble that involved demanding scaling needs.



How did DFINITY start growing?

In 2015 Dominic teamed up with Tom Ding, a crypto entrepreneur, and co-founded String Labs, a crypto studio, incubator and investor in Palo Alto. Because extensive theoretical groundwork already existed, and because of pressing demand for the functionality only an AI-governed world compute “decentralized cloud” can easily deliver, String Labs decided DFINITY would be the first protocol it helped incubate to production. The core team was later joined by Timo Hanke, a cryptographer who had previously designed ASIC Boost. Stiftung DFINITY has been formed to take the project to the next stage.



Is DFINITY linked to academia?

We stay carefully in touch but are only indirectly linked. We are located near Stanford University, and Dominic’s designs rely heavily on applying the BLS algorithm – designed by Dan Boneh and his PhD students – to create randomness. Dominic attends events and occasionally talks on campus. The Decentralized and Distributed Systems (DEDIS) Group at École Polytechnique Fédérale de Lausanne (EPFL) has two members working full time on DFINITY at any given time. Before DFINITY, in 2014 the Pebble project included several academics now well known for their interest in crypto, including Andrew Miller, Elaine Shi, Steve Omohundro and Ferdinando Ametrano. Although DFINITY uses very different systems, Honey Badger is closely related to an approach to distributed consensus originally used by Pebble. The DFINITY project counts several academics as contributors – interested parties should contact us to see how they can help.



How can I meet DFINITY people in person?

We can be found in Silicon Valley and all around the world, especially at crypto conferences! Feel free to drop us a line at hello@dfinity.org.



Does DFINITY have a release schedule?

Yes. The first three client releases are:

○Copper (in progress)
Using as a foundation the DFINITY “Threshold Relay Chain” system – which drives the network with an incorruptible, unpredictable and unforkable source of endogenously produced randomness – Copper will finalize computations at least 50X faster than Ethereum, and maximum throughput is expected to be significantly higher. A fully functioning Blockchain Nervous System will ease future protocol upgrades. The client will interact with the network via a dynamically loaded “protocol library” whose hash is specified by the BNS, allowing the BNS to orchestrate minor protocol changes and optimizations on the fly without interrupting users.

Software on DFINITY private networks can make atomic calls into software on the public/open DFINITY network.

○Tungsten
Building on the randomness created by Threshold Relay, Tungsten introduces the DFINITY systems that enable the network to scale out with miners: USCIDs, Validation Towers, Validation Trees and micro-shards (mining computers belong to many shards). The main chain will become a “legacy shard”. Software automatically deployed to the new, faster shards must adopt an asynchronous message-passing model to interoperate seamlessly with software on other shards. Basic legacy limitations that constrain transactions/computations to a single block will be removed, allowing the creation of daemons and other useful new forms of autonomous systems.







Why does DFINITY depend on randomness?

So far, the only means we have found to organize a vast number of mining clients in an attack-resistant network that produces a virtual computer is to apply cryptographically produced randomness. Of course, Satoshi also relied on randomness by having miners race to solve a current puzzle whose solutions can only be found randomly using brute force computation, then allowing winners to append blocks of Bitcoin transactions to his chain. DFINITY needs stronger and less manipulable randomness that is produced more efficiently in fixed time. Randomness does not only play an important role in ensuring that consensus power and rewards are fairly distributed among all miners. Turing-complete blockchains like DFINITY require a higher standard of randomness, since smart applications may enable high-volume transactions that hinge on aleatory conditions, so that the potential gain from manipulation could be arbitrarily high.

The solution we found is Threshold Relay, which applies cryptography to create randomness on demand of sufficient network participants in a manner that is almost incorruptible, totally unmanipulable and unpredictable. Using Threshold Relay, DFINITY network participants produce a deterministic Verifiable Random Function (or VRF) that powers network organization and processing.

Expert. It’s worth noting that randomness played a key role in distributed computing long before the advent of Satoshi’s blockchain. The most powerful Byzantine Fault Tolerant consensus protocols that can bring participants to agreement without a leader in an asynchronous network, where no assumptions can be made about how long it takes to deliver messages, also depend on a construct called the “common coin” (no pun intended). This produces a series of random coin tosses and is typically implemented using a unique deterministic threshold signature system, with IBM Research applying RSA this way in 2000. During 2014, Dominic worked on a scalable crypto project that involved derivatives of a more recent best-of-breed protocol. This is the origin of his application of BLS signatures to produce random values in decentralized networks, and of his subsequent thinking about their numerous powerful applications. Whereas RSA threshold systems depend on a trusted dealer for setup, BLS threshold systems can easily be set up without a trusted dealer.





Is a source of randomness useful for cloud apps?

Yes, a source of randomness can be essential within an open cloud platform. For example, beyond trivial applications in fair lottery and games systems, randomness can be used to randomize the order of transactions submitted to a financial exchange, making “front running” by miners harder. But perhaps the most powerful applications are within autonomous systems. A great example is provided by the PHI decentralized commercial banking system, which is currently being developed by the String Labs team. PHI is fully autonomous but is able to judiciously give out loans algorithmically using human validators as proxies, who are randomly selected one after another to prevent collusion. Arguably, most autonomous systems that need to make decisions on external data they cannot self-validate necessarily depend upon random selection to validate propositions about the outside world and resist attack.

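Both applications mentioned above reduce to seeding a deterministic shuffle or sample with a beacon value. The sketch below is illustrative only: the function names, and the use of SHA-256 with Python’s `random.Random`, are assumptions rather than DFINITY’s actual mechanism.

```python
import hashlib
import random

def fair_order(beacon: bytes, txs):
    """Order submitted transactions by beacon-derived randomness rather
    than by miner preference, blunting front-running."""
    rng = random.Random(hashlib.sha256(beacon).digest())
    out = list(txs)
    rng.shuffle(out)
    return out

def pick_validators(beacon: bytes, candidates, k):
    """Randomly select k validators (as in the PHI example) so that
    collusion cannot be arranged in advance."""
    rng = random.Random(hashlib.sha256(b"validators" + beacon).digest())
    return rng.sample(list(candidates), k)
```

Every node derives the same ordering and the same selection from the same beacon value, so no single party controls either.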


How does DFINITY produce randomness?

DFINITY has introduced a novel mechanism called “Threshold Relay”. This produces a deterministic source of randomness that is almost incorruptible, totally unmanipulable and unpredictable.



Hey, why not just use the block hash as a random value?

Many of the protocols DFINITY uses to scale out depend on randomness being unmanipulable and unpredictable. In a Proof-of-Work system there is an expense associated with creating several candidate blocks so as to “select” the hash, but it can be done. In a Proof-of-Stake system, where no brute force computation is involved, a miner can easily modify the content of a block to determine its hash, making a block hash completely useless. However, a block hash does not suffice for our purposes in either case.

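The grinding attack described above is easy to demonstrate: with no work function, trying candidate blocks is virtually free. A toy sketch (the nonce field and predicate are hypothetical, standing in for any free choice a block producer has over block content):

```python
import hashlib

def grind_block(payload: bytes, favours, max_tries=100_000):
    """Without proof-of-work, a block producer can costlessly tweak a
    free field (a nonce here) until the block hash suits them."""
    for nonce in range(max_tries):
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if favours(digest):
            return nonce, digest
    return None
```

A few thousand cheap hash evaluations suffice to steer the "random" block hash toward almost any predicate an attacker cares about, which is why it cannot serve as a beacon.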


Hey, why not just use a commit-reveal scheme for randomness?

These schemes generally involve a “last revealer” who can choose not to play ball and therefore influence the result if the others proceed anyway. Levying fines on those who withhold their commitments doesn’t really work since the rewards gained by manipulating the randomness might be far higher (after all, any number of apps might be depending upon the randomness produced by the cloud). Apart from the flawed security, such schemes are also necessarily slow and prone to failure because they depend on all participants supplying their commitments before they can proceed.
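The “last revealer” problem can be made concrete with a toy XOR-based commit-reveal scheme (all names here are illustrative):

```python
import hashlib

def commit(value: bytes, salt: bytes) -> bytes:
    """Publish a hash commitment now; reveal (value, salt) later."""
    return hashlib.sha256(salt + value).digest()

def combined(reveals):
    """The scheme's output: XOR of all revealed values."""
    acc = 0
    for v in reveals:
        acc ^= int.from_bytes(v, "big")
    return acc

def last_revealer(others, my_value, prefer_even=True):
    """The last revealer sees every other value, computes both possible
    outcomes, and withholds whenever honesty is unfavourable."""
    with_me = combined(others + [my_value])
    without_me = combined(others)  # the protocol proceeds without the holdout
    if (with_me % 2 == 0) == prefer_even:
        return "reveal", with_me
    return "withhold", without_me
```

Whatever honest participants do, the last revealer gets a free binary choice over the output, so a fine only deters the attack if it exceeds the (potentially unbounded) gain from biasing the randomness.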



Why not use a TPM chip for randomness?

It would be impossible to decide who should run the hardware and it might be turned off!



Why not use the notarization of a block for randomness?

That would require everyone to agree on a single winning block first, which means running a full consensus protocol, for which there is not enough time. Furthermore, the process of agreement would introduce a surface for bias (manipulation) because it would open up a choice between two or more messages. As a matter of defense in depth we want to avoid that. If the agreement and signing phase are clearly separated and the signing is considered unpredictable then there can’t be bias.



How is a DFINITY network composed?

A DFINITY network is composed of mining clients – often referred to as “processes” – that are connected in a peer-to-peer broadcast network. Each client must have a “mining identity” that it uses to sign its communication messages and participate, and which is recorded in the globally maintained network state. In the public/open DFINITY network, a mining identity is created by making a security deposit paid in a quantity of dfinities set by the decentralized Blockchain Nervous System governance mechanism, whereas in a private DFINITY network valid mining identities are defined by a trusted dealer such as a corporate systems administrator. Each client is expected to make available some standard quantity of computational resource – data processing capacity, network bandwidth and storage – to which it is held using mechanisms such as the USCIDs explained in a later FAQ. As the network grows, the broadcast network is sharded into many sub-networks to prevent communications bottlenecks forming.

Expert. Connections between processes are organized in a Kademlia-like structure using derivatives of their public identities, proven genuine using zero-knowledge proofs. Each process maintains connections to some number of other processes, and each consequently has a very high chance of having its message broadcasts propagate throughout the network by gossip and of receiving messages broadcast by other processes. The properties of such broadcast mechanisms are essential to the operation of decentralized networks generally. An adversary can try to subvert this using an “eclipse attack”, which involves surrounding a correct process with faulty processes that then filter which messages it can send and receive. In the Tungsten release of DFINITY we plan to make such attacks much harder by constraining the peers to which processes can connect using our endogenous random beacon and cryptographic operations derived from the identities themselves. The network will be forced to continually reorganize into constrained random forms, making it almost impossible for an adaptive adversary to perform attacks on targeted sectors.




How does Threshold Relay work?

Note: also see the technical papers and introductory decks.


A network of clients is organized as described in the foregoing FAQ. Threshold Relay produces an endogenous random beacon, and each new value defines random group(s) of clients that may independently try to form into a “threshold group”. The composition of each group is entirely random, so groups can intersect and a client can be present in multiple groups. In DFINITY, each group comprises 400 members. When a group is defined, the members attempt to set up a BLS threshold signature system using a distributed key generation protocol. If they are successful within some fixed number of blocks, they register the public key (“identity”) created for their group on the global blockchain using a special transaction, so that it becomes part of the set of active groups in a following mining “epoch”. The network begins at “genesis” with some number of predefined groups, one of which is nominated to create a signature on some default value. Such signatures are random values – if they were not, the group’s signatures on messages would be predictable and the threshold signature system insecure – and each random value produced is used to select a random successor group. This next group then signs the previous random value to produce a new random value and select another group, relaying between groups ad infinitum and producing a sequence of random values.
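The relay loop itself can be sketched compactly. In this toy, an HMAC stands in for the group’s unique deterministic threshold signature (an assumption purely for runnability); the point is only that each output both selects the next group and becomes that group’s input:

```python
import hashlib
import hmac

def threshold_relay(group_keys, genesis: bytes, rounds: int):
    """Toy relay loop: each random value selects the next group, which
    'signs' the previous value to yield the next random value."""
    value = genesis
    beacon = []
    for _ in range(rounds):
        # The current value deterministically selects the successor group.
        group = int.from_bytes(value, "big") % len(group_keys)
        # That group's deterministic signature on the value is the next value.
        value = hmac.new(group_keys[group], value, hashlib.sha256).digest()
        beacon.append(value)
    return beacon
```

Because each step is a deterministic function of the previous value and the selected group’s key, the whole sequence is fixed once the groups are, which mirrors the unmanipulability argument below.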

In a cryptographic threshold signature system a group can produce a signature on a message upon the cooperation of some minimum threshold of its members, which is set to 51% in the DFINITY network. To produce the threshold signature, group members sign the message individually (here the preceding group’s threshold signature), creating individual “signature shares” that are then broadcast to other group members. The group threshold signature can be constructed from a sufficient threshold of signature shares. So, for example, if the group size is 400 and the threshold is set at 201, any client that collects that many shares can construct the group’s signature on the message. Each signature share can be validated by other group members, and the single group threshold signature produced by combining them can be validated by any client using the group’s public key. The magic of the BLS scheme is that it is “unique and deterministic”, meaning that from whatever subset of group members the required number of signature shares is collected, the single threshold signature created is always the same and only a single correct value is possible.
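The “unique and deterministic” combination step has a classical analogue in Shamir secret sharing, where any qualifying subset of shares interpolates to the same value. The sketch below illustrates that over a plain prime field; real BLS threshold signatures perform the analogous Lagrange interpolation over signature shares in a pairing-friendly group, which this toy does not attempt.

```python
import random

P = 2**127 - 1  # a Mersenne prime; toy field modulus

def make_shares(secret, threshold, n):
    """Shamir-share `secret` so that any `threshold` shares reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def combine(shares):
    """Lagrange-interpolate at x=0: every qualifying subset yields the
    same single value, the analogue of BLS uniqueness."""
    acc = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        acc = (acc + yi * num * pow(den, -1, P)) % P
    return acc
```

Whichever 201-of-400 subset of shares a client happens to collect, `combine` returns the identical value, so there is never a choice of outputs to manipulate.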

Consequently, the sequence of random values produced is entirely deterministic and unmanipulable, and the signatures generated by relaying between groups produce a Verifiable Random Function, or VRF. Although the sequence of random values is pre-determined given some set of participating groups, each new random value can only be produced upon the minimal agreement of a threshold of the current group. Conversely, for relaying to stall because a random number was not produced, the number of correct processes must fall below the threshold. Thresholds are configured so that this is extremely unlikely. For example, if the group size is set to 400 and the threshold is 201, 200 or more of a group’s members must be faulty to prevent production. If there are 10,000 processes in the network, of which 3,000 are faulty, the probability this will occur is less than 10e-17 (you can verify this and experiment with group sizes and fault threats using a hypergeometric probability calculator). This is due to the law of large numbers – even though individual actors might be unpredictable, the greater their number the more predictably they behave in aggregate.
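The figure quoted above can be checked directly with the hypergeometric tail; the short script below stands in for the calculator mentioned (function name is ours):

```python
from math import comb

def stall_probability(network=10_000, faulty=3_000, group=400, threshold=201):
    """Probability that a randomly sampled group contains too few correct
    processes to reach the signing threshold, i.e. that the number of
    faulty members is at least group - threshold + 1 (here 200 of 400)."""
    min_faulty = group - threshold + 1
    total = comb(network, group)  # all possible group compositions
    bad = sum(comb(faulty, k) * comb(network - faulty, group - k)
              for k in range(min_faulty, group + 1))
    return bad / total
```

With the FAQ’s parameters (10,000 processes, 3,000 faulty, groups of 400, threshold 201) the exact tail probability is consistent with the bound quoted above.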

As well as being incredibly robust, such systems are also highly efficient. In a broadcast gossip network, a group of 400 can produce its threshold signature by relaying only about 20KB of communications data. Meanwhile the BLS threshold cryptography libraries DFINITY was involved in creating can perform the computation for the necessary operations in fractions of a millisecond on modern hardware.








How does a Threshold Relay blockchain work?

Note: also see the technical papers and introductory decks.


A Threshold Relay “blockchain” is created by taking Threshold Relay, using the randomness to define a priority list of “forgers” at each block height, then having the current group also “notarize” the blocks produced. So for example, at block height h the random number produced at block height h-1 randomly orders all mining client processes in the network into a priority list, with the first process in slot 0, the second in slot 1, and so on. When the members of group h first receive the preceding signature that selected their group, they set their stopwatches running (these will naturally be slightly out of sync, since they receive the preceding group signature at different times). They then wait for the network’s current block time to expire before they begin processing blocks produced by the priority list of mining processes.

For optimization purposes, slot 0 is allowed to produce a block immediately after the block time expires, and successive slots can produce blocks after additional small increments in time. The slots themselves are weighted, with blocks from slot 0 having a score of 1, from slot 1 having a score of 0.5, and so on (the same rewards are also provided to forgers if their block is included in the chain). The members of the current group produce signature shares on blocks they receive according to the following rules: (i) they have not previously signed a block representing a higher scoring chain (ii) the block references a block signed by the previous group (iii) the block is valid with respect to its content and their local clock, and (iv) they have not seen their group’s signature on a valid block.

Group members thus continue creating signature shares on blocks until their group has successfully signed a block and they have received the signature, whereupon they sign the previous randomness and relay to the next group (and stop signing blocks that they see). Of course, in practice the highest priority block from slot 0 will normally be waiting in member’s network input queues for processing upon expiry of the block time, and this will be signed and no others. The scoring of blocks from different slots exists to help forgers and groups to choose between candidate chains, but it is the group notarization that accelerates and cements convergence since new blocks can only build on blocks the previous group has signed.
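The forger priority list and slot scoring can be sketched as follows. Deriving the permutation with a SHA-256-seeded shuffle, and halving the score per slot, are assumptions extrapolated from the description above (the FAQ only gives the first two weights):

```python
import hashlib
import random

def slot_assignment(prev_signature: bytes, processes):
    """Derive the per-height forger priority list from the previous
    group's threshold signature (the height's random value)."""
    rng = random.Random(hashlib.sha256(prev_signature).digest())
    order = list(processes)
    rng.shuffle(order)
    return order  # order[0] forges for slot 0, order[1] for slot 1, ...

def block_score(slot: int) -> float:
    """Slot 0 scores 1, slot 1 scores 0.5, and so on (halving assumed)."""
    return 0.5 ** slot
```

Every correct process computes the identical list from the same signature, so there is no ambiguity about who holds slot 0 at each height.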

The block notarization process provides enormous advantages. Whereas in traditional Proof-of-Work and Proof-of-Stake blockchains it is always possible to go back in time and create a new branch of the chain, in Threshold Relay chains only blocks that have been broadcast at the correct time and notarized by the then correct group can be included in valid chains. This addresses key attacks and vulnerabilities such as “selfish mining” and “nothing at stake” that greatly increase the number of confirmations required before a block’s inclusion in the chain is fully secure or “finalized”. By contrast, Threshold Relay chains build consistency at a furious rate – normally there will only be a single candidate chain whose head is in slot 0, and once this has been signed it can be trusted for most purposes. Finality is usually provided in seconds.

The advantages of Threshold Relay blockchains are overwhelming. They don’t depend upon expensive Proof-of-Work processes. If required, a network can run multiple chains in parallel without undermining their security properties. They finalize transactions far faster than any other system making it possible to create superior user experiences. And, because a fixed block time is allotted to forgers, far more transactions can be included (by contrast in Proof-of-Work systems, the faster a miner can broadcast a new block the greater the chance another will build upon it, encouraging him to build on empty blocks that he does not have to validate and thus also encouraging him to make his block empty – which is why 50% of Ethereum’s blocks are currently empty). Services such as SPV can also be provided to clients if they have a Merkle root hash of the current set of groups in the network. Meanwhile, security is more predictable, since viable chains must always be notarized and visible.





How does DFINITY scale-out ?

In and of themselves, Threshold Relay blockchains cannot “scale-out”, although their performance properties certainly provide “scale-up” gains when compared with existing systems. DFINITY however applies their properties in a three-level scale-out architecture that addresses, in order, consensus, validation and storage. The consensus layer involves a Threshold Relay chain that creates a random heartbeat that drives a Validation Tree of Validation Towers in the validation layer, which does for validation what a Merkle tree does for data and provides almost infinitely scalable global validation. The random beacon also defines the organization of mining clients into storage (state) shards in the storage layer, which use their own Threshold Relay chains to quickly reach consensus on received transactions and resulting state transitions that are passed up to the validation layer. The top-level Threshold Relay consensus blockchain then records state roots provided by the Validation Tree that anchor all the storage in the network.

You will notice that no mention is made of blocks of transactions, and this is because there are none. A DFINITY cloud is intended to store exabytes of state and process millions or billions of transactions a second. No process would be able to view more than an almost infinitesimally small fraction anyway. What the network does instead is focus on ensuring that recorded state – as defined by its root hash – only progresses through valid transitions upon application of valid transactions. Thereafter, the correct provenance of any data, the execution of any transaction, or the performance of required actions by the mining clients themselves, can be proven using Merkle paths to the current global root.
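Checking such a Merkle-path proof is straightforward. The sketch below assumes SHA-256 and a binary tree; DFINITY’s actual tree layout and hash function are not specified in this FAQ, so treat the shape of the proof as illustrative.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_path(leaf: bytes, path: list[tuple[bytes, str]], root: bytes) -> bool:
    """Recompute the root hash from a leaf and its sibling hashes; each
    path step says on which side the sibling sits."""
    digest = h(leaf)
    for sibling, side in path:
        digest = h(sibling + digest) if side == "left" else h(digest + sibling)
    return digest == root

# Two-leaf tree: root = H(H(A) + H(B)); proving A requires only H(B)
leaf_a, leaf_b = b"A", b"B"
root = h(h(leaf_a) + h(leaf_b))
ok = verify_merkle_path(leaf_a, [(h(leaf_b), "right")], root)
```

The proof size grows with the logarithm of the data set, which is what lets a 32-byte root recorded in the consensus layer anchor exabytes of state.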

History tip. This architecture and supporting protocols were devised by Dominic Williams in early 2015 and were briefly introduced along with other technical innovations at a Bitcoin Devs meetup in San Francisco and during an “Introduction to Consensus” talk given at Devcon1 in London (if you blinked you’d have missed it!!!).



What does a Validation Tower do and how does it work?

Note: also see the technical papers and introductory decks.


In a traditional blockchain system such as Bitcoin or Ethereum, the blocks record every transaction from genesis forwards and each member of the network participates in checking that the contents of blocks and recorded updates to state are valid. This would never work in a system such as DFINITY because the volumes of data and transactions are too large for any individual process to handle – potentially involving exabytes of state and billions of transactions each second. Therefore, DFINITY needs a way to securely validate updates to shards of its state using relatively small subsets of the processes in its network. It will then store just a single Merkle root anchoring the global state in its top-level chain (making it, ironically, far lighter weight than that of Bitcoin or Ethereum).

To construct the Merkle root we will need a Validation Tree, which helps a decentralized network validate unlimited things in a similar way to how a Merkle tree makes it possible to notarize the existence of unlimited data using a single root hash. At each node of this tree will be a Validation Tower that validates assigned inputs and produces output digests attesting to their processing. At the lowest level in the tree, towers will receive transactions and proposed consequential transformations occurring to assigned shards of state data. The purpose of a tower is to validate things using a relatively small subset of processes in the network but do so with similar security as if all the processes in the network had been used – as occurs, in theory at least, in Bitcoin and Ethereum. At first sight this sounds impossible, but luckily it is not.

A validation tower proceeds through an infinite sequence of levels, with each new level introducing an attestation that some transformation is valid. For example, a validation tower might be assigned to validate updates made to some shard of storage by transactions submitted to the network. Each level of a tower is constructed by a new group of processes that has been selected by the random beacon produced by the top-level Threshold Relay chain. When a group builds a new level of the tower, it attests that some new transformations are valid and that the transformations represented by some number of lower levels to a depth d are valid too.

Transformations are considered “validated” once the level first attesting to them has been buried to depth d, which indicates that the d-1 levels above also attest to them. Therefore, in normal circumstances, whenever a group of processes builds a new level, they also transition the transformations in the level previously buried to depth d-1 and now buried to depth d into a fully validated and irreversible state. The natural question is why an attacker can’t somehow get an invalid transformation buried to depth d. This might take a lot of attempts since he will have to control the groups building d consecutive levels, but if getting his faulty transformation validated wins him a trillion dollars, it will be worth it!
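The burial rule can be modelled with a toy data structure. The class and method names below are invented for illustration, and a deliberately tiny d is used; the point is only that a level’s transformations become final once d-1 further levels sit above it.

```python
class ValidationTower:
    """Toy model of a Validation Tower: levels stack up, and a level's
    transformations finalize only once it is buried to depth d."""

    def __init__(self, depth_d: int):
        self.d = depth_d
        self.levels = []  # each entry: the transformation batch attested at that level

    def build_level(self, transformations: list[str]) -> None:
        self.levels.append(transformations)

    def finalized(self) -> list[str]:
        """Transformations buried to depth d are fully validated and irreversible."""
        cutoff = len(self.levels) - (self.d - 1)
        return [t for level in self.levels[:cutoff] for t in level]

tower = ValidationTower(depth_d=3)
tower.build_level(["tx1"])
tower.build_level(["tx2"])
first = tower.finalized()   # nothing buried to depth 3 yet -> []
tower.build_level(["tx3"])
second = tower.finalized()  # level 0 now has 2 levels above it -> ["tx1"]
```

In the real protocol each level is built by a fresh beacon-selected group, so no single party decides when a level gets buried.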

To understand why this is not possible, first consider that once a process takes part in signing a new tower level, it enters a dormant state. Here it will remain until such time as the level it participated in building has been buried sufficiently deeply – which in normal circumstances will be to depth d – after which time it will be reactivated (in practice processes will need to prove that their last tower level has been validated when participating in network roles such as block forging, and this is how they are excluded). The challenge for the adversary, therefore, is to overcome the expense involved in having his processes frozen, since they must be joined to the network with a substantial security deposit of participation tokens or via some other expensive operation. Once a process has committed to a faulty level, he can only free it by finding other processes that will build on top and get it buried d levels deep.

The adversary may hope to play the odds and trade the expense against some massive gain. His first problem is that he does not know which processes will be called upon by the random beacon to build the next level, nor the levels after that, since the randomness is unmanipulable and unpredictable. If correct process(es) are selected to build the next level they will reject the adversary’s level since it is invalid. The adversary might therefore hope to withhold his faulty level from the next processes assigned by the randomness until such time as the selected processes are also controlled by him and will validate his faulty level. However, the Validation Tower protocol prevents this from happening: whenever a new level is not built in lock step with the random beacon, each unvalidated level beneath it resets to needing a further d-1 levels built on top before it becomes valid.

Therefore, the adversary needs to commit his faulty processes to building a faulty level (that, say, validates an invalid state transition awarding him a trillion dollars) and then hope that by complete chance he will control the following d-1 consecutive groups that the random heartbeat will select to build additional levels on the tower. But it should be clear that the math will not stack up well for him (no pun intended). For example, if each level must be built by 10 processes, 6 of whom must cooperate to create a new level, and the network has 10,000 processes of which 3,000 are faulty (and, in an already highly unlikely event, controlled by this single adversary), the chance that he controls a single level-building group is 0.047. If he commits to a bad level, he will need – by luck – a further 9 levels built by groups he controls to succeed in having his fraud made valid. This will only occur with a probability of 0.047^9=0.000000000001119, or to put it another way, once every 893,550,862,955 tries!
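The figures above are easy to reproduce. The sketch below models group membership as independent draws, which matches the approximation behind the text’s 0.047 figure:

```python
from math import comb

group_size, threshold = 10, 6   # 10 builders per level, 6 must cooperate
p_faulty = 3_000 / 10_000       # adversary controls 3,000 of 10,000 processes
d = 10                          # burial depth: fraud needs d consecutive bad groups

# Chance the adversary controls at least 6 of the 10 randomly selected builders
p_level = sum(
    comb(group_size, k) * p_faulty**k * (1 - p_faulty)**(group_size - k)
    for k in range(threshold, group_size + 1)
)  # ~0.047

# After committing a bad level he must also win the next d-1 = 9 selections
p_success = p_level ** (d - 1)  # ~0.047**9, on the order of 1e-12
```

One success per roughly 9e11 attempts, with each failed attempt costing frozen security deposits, is what makes the attack economically hopeless.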

You may now see the problem the adversary has. Each time he attempts to commit the fraud by committing to bad levels and hoping his processes will be selected to build d-1 following levels, with overwhelming probability he will have the processes he uses decommissioned (meanwhile, correct processes below will not be decommissioned since they can build out from under him). In this example, each level he builds that is never validated by the tower will cost him 6 processes. If the participation token security deposit associated with a process is valued at $10,000, each failed attempt costs him $60,000 for the 6 processes used to create a bad level, and at roughly 894 billion expected attempts, a successful fraud would carry an expected cost of about $5.36e16. Clearly, in the real world he will run out of resources long before he manages to fraudulently award himself a trillion dollars of crypto!

What is fantastic about Validation Towers is that they enable the network to apply a small subset of processes to validate state transitions with complete security. All that is required for their operation is an incorruptible, unmanipulable and unpredictable source of randomness: supplied courtesy of Threshold Relay chains.








How does a Validation Tree work?

Note: also see the technical papers and introductory decks.


The purpose of a Validation Tree is to produce a Merkle tree over the current state data stored by the virtual computer and key events that network processes (mining clients and full nodes) must prove have occurred. The power of a Merkle tree is that it produces a single “root hash” digest that, while being as small as 20 bytes, can act as a signature for a virtually unlimited input data set. The input data is arranged in some suitable well defined fashion and becomes the leaves of the tree, and the hashes of the leaves are then themselves combined pair or n-ary-wise hierarchically in a tree until a single root hash is produced. Thereafter, the existence of a data leaf can be proven by providing a “Merkle path” up through the tree to the root, which comprises every higher hash whose value is partly dependent on its data.
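A minimal pairwise construction of such a root hash can be written in a few lines. SHA-256 is used here, giving a 32-byte digest; the text’s “as small as 20 bytes” figure corresponds to older hashes such as SHA-1 or RIPEMD-160.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash the leaves, then combine hashes pairwise, level by level,
    until a single root hash remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last hash on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"leaf-%d" % i for i in range(4)])
```

However the leaves are arranged, the resulting digest changes if any leaf changes, which is what lets it act as a signature over the whole input set.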

By producing a Merkle root, the Validation Tree can anchor virtually unlimited quantities of data the network either stores as state or needs to hold for purposes of its internal functioning. As a distributed data structure, it involves an arrangement of Validation Towers that act as the nodes of the tree. These are driven by the heartbeat of a random beacon such as a Threshold Relay system, and validate inputs producing “fully validated” hashes as outputs. At the bottom level of the tree, Validation Towers sit directly above the state leaves, which will typically be shards of state managed by subsets of network processes. These pass up state transitions (updates to state created by computation performed by transactions) to their assigned Validation Towers, and the towers eventually produce hashes describing their validated new state. The power of the system is that a Validation Tower can also validate and combine the outputs of other towers to produce a Merkle tree.

For example, a bottom level Validation Tower will produce attestations to a current root hash anchoring some range (“shard”) of state. In a Merkle tree, its parent node would combine this hash with those of its sibling(s) recursively up through the hierarchy until the root hash is produced. The challenge in an enormous decentralized network such as DFINITY, is that there may be too many leaf hashes for individual processes to combine into a Merkle tree. We might hope to simply assign subsets of processes to construct different parts of the Merkle tree and have the protocol assemble it from components, but in this case we would have no way of knowing that the components, and thus the overall tree, were correct. The solution is for Validation Towers to be used to combine the leaf hashes, upwards in a tree, until the root hash is produced. Thus higher towers receive and combine the hashes of their respective child nodes, then producing a new fully validated hash that is passed to their parents, recursively until the root is produced.

Thus, there is some root Validation Tower that produces valid root hashes, and it is from this that fully validated root hashes are taken and recorded in a network’s top level record of consensus (such as a Threshold Relay blockchain). Each individual tower operates independently and can proceed at a different rate, which prevents the progression of the network being dependent upon some subset of the processing it is performing. The most recent root hash recorded by consensus then anchors the global state stored in any number of shards, and is also used to anchor critical events that have occurred as though they too are simply data. An individual process that was required to have participated in producing a level of some validation tower can thus prove performance of the action in communications with other processes by supplying a Merkle path to some root recorded in the consensus record. In this way we can anchor exabytes of data, and restrict participation in the network to processes whose behavior is correct.

Of course, a considerable journey is involved between an update to state being applied and the transformed state being anchored by a root hash recorded by the master consensus layer (since, the combination of hashes must proceed upwards through towers in the hierarchy). This is unavoidable, since the master record can never be incorrect, but it does not have to reduce the speed with which all computations are finalized. If a shard is maintained by a sufficiently large set of processes, many clients of the network will accept a transaction to be finalized the moment the shard advertises it as decided. Meanwhile, finality is certainly achieved the moment the lowest tower has validated the transaction, even if it will take a while before the master consensus notarizes it. In the applications envisaged for decentralized cloud systems, the additional computational expense is also of no consequence: they provide enormous reductions in the costs associated with running cloud services through the properties of autonomy, unstoppability and tamperproofing, among others, which dramatically lower requirements for supporting human capital.




What is a Unique State Copy ID USCID and how does it work?

Note: also see the technical papers and introductory decks.


A key purpose of the decentralized cloud is to provide a compute platform where unstoppable applications can be built and run. This depends upon its capacity to securely store state in the protocol buffers of clients. In Bitcoin and Ethereum, there is a single blockchain recording transactions, and the Ethereum state database is replicated across all clients. Networks such as DFINITY are designed to store unlimited quantities of state as needed, and therefore it is not possible for clients to maintain copies of everything held – it might, after all, be many exabytes or more. Therefore it is necessary to partition (shard) the storage of state across clients, naturally raising questions about what factor of replication is needed to provide the necessary level of security. In turn, this depends upon the answer to another crucial question: how can we know that the data really is replicated?

The challenge that must be addressed is that although numerous clients might appear to hold replicas of data, the impression might only be a chimera constructed by an adversary for the purposes of earning mining rewards without doing any work. The problem is well illustrated by systems such as IPFS and its associated incentive system, FileCoin. IPFS is a decentralized file store, where files and other objects are addressed by the hash of their data. The problem is that when a user uploads their file, it is not clear how many times it is replicated, nor whether those clients currently storing – or caching – their file will continue to do so. FileCoin aims to solve this by paying participation token rewards to clients that can show that they hold copies of data, creating the necessary incentive for its widespread maintenance. The system involves challenges being made that clients can satisfy using copies of the data they hold. However, the unaddressed problem is that the protocol cannot be sure whether the clients are in fact just proxies for some giant mainframe where all the data is stored without any replication at all!

To provide realistic guarantees about the safety of data, networks such as DFINITY need to be much more sure about the underlying replication factor involved. This will also enable them to ensure that replication is not too high – after all, it would be ridiculous to replicate a file across 1M mining clients. The solution is provided by USCIDs, which require clients to maintain copies of the state data assigned to them in a unique form – hence the acronym “Unique State Copy ID”. These work by requiring each client to store all data encrypted using a key derived from their identity, about which all other clients are necessarily aware. A specially tuned symmetric encryption algorithm is used that makes encryption relatively slow and decryption extremely fast. It is designed so that while it is possible to encrypt data when it is updated and written, it would not be practical to encrypt all the assigned state in reasonable time.

The USCID system requires that clients make attestations to their uniquely encrypted state during protocol communications. For example, when a client produces a candidate block in a Threshold Relay chain PSP, this must contain such an attestation. In order for the block to have a chance of being included in the chain and a reward returned, it must be broadcast within a limited time window of a few seconds, and here the cheating client has a problem. The attestation is the output of a hash chain produced by a random walk over their uniquely encrypted state – starting at some random block dictated by the random beacon present in Threshold Relay networks, the block is added to a hash digest that then selects another random block, and so on, until the data of all the blocks in a random chain of some required length have been added to the digest. Since hashing is fast, producing the attestation correctly will be easy so long as the data is encrypted using the derived key as required. However, if it is held in plaintext, for example on the imagined central mainframe, blocks will have to be encrypted on the fly before being fed into the digest. Because of the properties of the specially designed symmetric encryption algorithm used, production of the attestation will take too long for it to be useful.
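The attestation walk can be sketched like this. It is a simplification: all names and parameters are invented, SHA-256 stands in for the protocol’s primitives, and the slow-to-encrypt symmetric cipher that actually makes cheating expensive is omitted entirely.

```python
import hashlib

BLOCK_SIZE = 1024  # illustrative size of one encrypted state block

def uscid_attestation(encrypted_state: list[bytes], beacon: bytes, walk_len: int) -> bytes:
    """Hash chain over a beacon-seeded random walk of the client's
    uniquely encrypted state blocks (sketch of a USCID attestation)."""
    digest = hashlib.sha256(beacon).digest()
    for _ in range(walk_len):
        # The current digest deterministically picks the next block to visit
        index = int.from_bytes(digest[:8], "big") % len(encrypted_state)
        digest = hashlib.sha256(digest + encrypted_state[index]).digest()
    return digest

state = [bytes([i % 256]) * BLOCK_SIZE for i in range(256)]  # stand-in encrypted shard
beacon = hashlib.sha256(b"random beacon output").digest()
att = uscid_attestation(state, beacon, walk_len=1000)
```

A client holding the state in its uniquely encrypted form computes this quickly; one holding only plaintext would have to run the deliberately slow encryption inside the loop and miss the broadcast window.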

During normal communications a client will continually produce such attestations, which will rarely ever be validated. However, when the random beacon randomly requires validation, or when a reward is being earned during block origination, validation can be performed by other clients that hold replicas of the same data. An individual client with the same data can validate the attestation for itself by starting at the same block, decrypting the data to plaintext and then re-encrypting it using the attestor’s derived key, and on, until the same output hash should have been created whereupon it can be compared. This will necessarily take a while due to the properties of the chosen encryption scheme, but no matter, as it can be performed independently of the short-term progression of the network. Of course, clients must anyway maintain earlier versions of the state using a special database in case of a chain reorganization, so walking the version of a copy from some earlier moment in time does not present a challenge. A structure similar to a Validation Tower is used to decide definitively whether an attestation is valid. If it is not, the attestor’s security deposit will be lost and the job of holding replicas will be assigned to another client.







How does the BNS control the network?

The Blockchain Nervous System (BNS) has access to special op codes in the virtual machine. This allows the BNS to freeze, unfreeze and modify otherwise independent software objects (smart contracts). It can also configure the DFINITY client software run by users, for example to make them upgrade to a new version of the network protocol.



In what sense is the BNS an Artificial Intelligence (AI)?

DFINITY’s BNS is not a traditional AI like a neural network or Bayesian classifier. On one hand it needs input from human-controlled “neurons” to make decisions on proposals, but on the other, decisions result from decentralized “follow” relationships between neurons and non-deterministic algorithmic processes. The BNS improves its ability to make decisions as neurons are reconfigured by owners when new information comes to light and feedback is received. The actual process behind decisions is unknowable: neuron follow relationships exist only in neuron client software run by users on their own computers, and the distributed state of neuron client software cannot be captured. The process is non-deterministic because timing affects the way the neurons cascade to deliver decisions. The purpose of the BNS is to leverage crowd wisdom and knowledge to decide wisely on complex proposals such as “Adopt protocol upgrade X” or “freeze contract Y”.



Basically, how does this work?

A neuron has voting power proportional to dfinities that are locked inside it. Each neuron can either vote under the direction of its owner, or alternatively automatically seek to follow the voting of other neurons whose addresses the owner configured. This is similar to longstanding “liquid democracy” concepts. In the BNS the follow relationships exist only on client computers and are unknowable, which is why the system might better be described as “opaque” liquid democracy. The BNS uses a system called “Wait For Quiet” to decide when it has received sufficient input to make a decision. Other information and algorithms can be used to assist with decision making, and “influential” neurons could potentially be driven by more traditional AI systems (whose designers are encouraged to come forwards and make proposals).
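The “follow” behaviour can be sketched as a simple recursive lookup. This is purely illustrative: real follow lists live only on owners’ machines, votes cascade asynchronously over time, and all names here are invented.

```python
def neuron_vote(neuron: str, proposal: str, manual_votes: dict, follows: dict, seen=None):
    """Resolve a neuron's vote: use the owner's direct vote if present,
    otherwise follow the first listed neuron that yields a vote."""
    seen = seen or set()
    if neuron in seen:
        return None  # break follow cycles
    seen.add(neuron)
    if (neuron, proposal) in manual_votes:
        return manual_votes[(neuron, proposal)]
    for followee in follows.get(neuron, []):
        vote = neuron_vote(followee, proposal, manual_votes, follows, seen)
        if vote is not None:
            return vote
    return None

# A small cascade: bob follows alice, alice follows a trusted expert
manual = {("expert", "upgrade-X"): "yes"}
follows = {"alice": ["expert"], "bob": ["alice"]}
result = neuron_vote("bob", "upgrade-X", manual, follows)
```

Because each `follows` dictionary exists only on its owner’s computer, no outside observer can reconstruct why the aggregate decision came out the way it did, which is the “opaque” property discussed below.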



Why is “opaque” important?

To see why, imagine conversely that the follow relationships between neurons, and the follows that occur, were knowable, and that some controversial decision was made. It might be possible to show that some particular neuron caused a cascade of follows and that its owner was “responsible” for the decision, resulting in social opprobrium or even legal or government action against them. In extremis, public follow relationships might result in out-of-band pressure being applied – even by kidnapping or extortion – to the owners of neurons high in the influence graph. This would degrade the ability of the BNS to make good decisions in pursuit of its objectives.



How do I create and run a neuron?

You create a neuron by making a security deposit of dfinities. The influence of the neuron is proportional to the deposit size. Deposited dfinities can only be released by dissolving the neuron, which takes 3 months – giving neuron owners a strong incentive to help drive good decision making as bad decisions may devalue the dfinities they have locked up. Meanwhile, you can earn additional dfinities by making your neuron vote. You do this by taking the “delegate key” released when you created your neuron, and installing it into neuron client software you run on a computer (such as your laptop). This will detect and report proposals made to the BNS. Initially the neuron client will ignore proposals for a default period to provide you with a chance to direct it how to vote. However, after this time it will look at the neuron follow list you have defined for the decision category. This is a list of the addresses of other neurons, in priority order, that should be followed. Once the default period is up, your neuron will begin trying to follow other neurons rather than waiting for you. You can update your follow lists at any time. For example, if you follow a talented coder on reddit, and they advertise their neuron address, you might insert it into your follow list for technical decisions. Of course, your follow list is invisible to the world as it only exists on your computer. If you want more time to decide how a proposal should be handled, you can temporarily freeze the neuron to prevent it following automatically.



Can neurons earn me dfinities?

Yes: creating and running a neuron is “thought mining”. At the end of each Dfinity mining epoch you will receive a thought mining reward proportional to the number of dfinities you locked inside your neuron(s). The reward will be factored down by the proportion of decisions your neuron failed to vote on. But since you can configure your neuron client to follow the votes of other neurons specified by address, your neurons should reliably earn you dfinities so long as your client software runs regularly. Note that your configuration should be done carefully – as mentioned above, if the BNS makes bad decisions the value of the dfinities you have locked up in your neurons could be adversely affected.
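The reward arithmetic described above can be sketched as follows. The exact formula is not published; this simply encodes “proportional to locked dfinities, factored down by the proportion of decisions missed”, with all names and figures invented for illustration.

```python
def thought_mining_reward(epoch_reward_pool: float, my_locked: float,
                          total_locked: float, participation: float) -> float:
    """Reward proportional to a neuron's share of locked dfinities,
    scaled down by the fraction of decisions it actually voted on."""
    return epoch_reward_pool * (my_locked / total_locked) * participation

# A neuron holding 1% of all locked dfinities that voted on 90% of decisions
reward = thought_mining_reward(100_000.0, 10_000.0, 1_000_000.0, 0.9)
```

Under these assumed numbers the neuron earns 0.9% of the epoch’s pool, illustrating why keeping the neuron client running (and following sensibly) matters.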



What is the DFINITY Constitution?

The constitution is a written document that guides neuron owners regarding system objectives. Currently, the constitution directs and corrals the community around three main objectives: scheduling appropriate protocol upgrades in a timely way, reversing and mitigating hacks such as The DAO, and freezing prohibited system types. Some level of subjectivity is involved, particularly in the third objective. The initial constitution requires that systems whose primary purpose is vice or violence be frozen (note that the constitution makes no requirements regarding law, since the virtual DFINITY computer created by the decentralized network is inherently without geography and jurisdiction). The constitution makes carve-outs to clarify thinking. For example, games of pure chance can be evaluated with respect to gambling, but neuron holders are directed to pass prediction markets that provide benefits to society. Similarly, a prostitution exchange should be frozen, but a network of genuine sex therapists is explicitly OK. The constitution aims to clarify these matters. Where the constitution is not clear, ultimately it will be for the creators of systems whose status is unclear to either have the constitution amended, or persuade the community of neuron holders of their case and then take their chances with the BNS.



Can the constitution be amended?

Yes. Proposals can be submitted to the BNS to amend the Constitution. Thus ultimately the BNS decides its own objectives.



What quorums are needed for decisions to be made?

Generally speaking, the use of quorums is problematic in decentralized voting for two reasons. Firstly, it creates an edge that can be exploited – for example by last-minute “ambush” voting that changes the decision outcome on a controversial proposal in a manner that gives people no chance to respond. Secondly, it is very difficult to know how many people will participate in voting. The BNS uses Wait For Quiet to address attacks related to the first issue, and because neurons can automatically follow others, it is able to set quorums much higher, for example at 40%.
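The interaction between automatic following and a high quorum can be sketched as below. All names, the one-level follow resolution, and the simple-majority rule are illustrative assumptions, not the actual BNS mechanism:

```python
def resolve_votes(neurons, follows):
    """Resolve each neuron's vote, letting an abstaining neuron copy the
    vote of the neuron it follows (one level deep, for simplicity)."""
    resolved = {}
    for n, vote in neurons.items():
        if vote is None and follows.get(n) is not None:
            vote = neurons.get(follows[n])   # inherit the followed neuron's vote
        resolved[n] = vote
    return resolved

def passes(resolved, quorum=0.40):
    """A proposal passes if participation reaches the quorum and a
    majority of votes cast are in favor."""
    cast = [v for v in resolved.values() if v is not None]
    if len(cast) / len(resolved) < quorum:
        return False
    return sum(cast) > len(cast) / 2

neurons = {"a": True, "b": None, "c": False, "d": None, "e": None}
follows = {"b": "a", "d": "a"}
resolved = resolve_votes(neurons, follows)
passes(resolved)  # b and d follow a, so 4 of 5 participate and the proposal passes
```

Because followers vote automatically, participation stays high even when most owners are passive, which is what makes a 40% quorum workable.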



Is the BNS a “Big Brother”?

The Blockchain Nervous System is a “responsible super user” rather than a Big Brother. It can freeze forbidden system types, but the community can submit proposals to amend the Constitution if they feel something shouldn’t be forbidden. The BNS also never destroys anything – if it makes a “mistake” freezing a system it can unfreeze it later. The purpose is not to be moralistic or even to enforce the law. For example, the Constitution takes no moral view on whether a SilkRoad exchange is a good or bad thing. Its main aim – currently – is simply to create a mainstream environment that is attractive for brand-sensitive businesses as well as users generally, and pure “Code is Law” systems exist for alternative use. Nonetheless, the BNS can also simply amend the constitution to lift restrictions any time it wants, and ultimately it is for the community of neuron holders to drive how it behaves.




How do I mine DFINITY?

You mine DFINITY by running instances of mining client software, each of which must have a “mining identity”. DFINITY mining clients are expected to supply a relatively small but approximately fixed amount of computational and storage capacity to the network. For this reason, mining operations will run many, many clients.



How do I receive rewards?

DFINITY mining is very different from proof-of-work mining, where hashing puzzles are solved. In the DFINITY network, mining clients play roles processing data and are rewarded for the performance of those roles. Consequently there is no need to add your clients to some kind of pooling system (this is not even possible), and each client you run will receive regular rewards as it participates in supporting the network, which it will do in various ways.



How do I create mining identities?

You must provide a security deposit in dfinities to the Blockchain Nervous System; the deposit is at risk if your client does not perform properly or gets hacked. The Blockchain Nervous System adjusts the size of the security deposit required to account for fluctuations in the value of dfinities and other factors.
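The adjustment described above implies that the number of dfinities required moves inversely with the token price. A minimal sketch, assuming the BNS targets a roughly constant fiat value for the deposit (the target value and risk factor here are illustrative knobs, not protocol values):

```python
def required_deposit(target_value_usd, dfn_price_usd, risk_factor=1.0):
    """Sketch: keep the deposit's fiat value roughly constant, so the
    dfinities required scale inversely with the token price."""
    return risk_factor * target_value_usd / dfn_price_usd

required_deposit(500, 2.50)  # at $2.50 per DFN -> 200 dfinities
required_deposit(500, 5.00)  # price doubles    -> 100 dfinities
```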



How are the rewards set?

In contrast to traditional decentralized networks like Bitcoin and Ethereum where new tokens are issued according to some predefined schedule, economic matters such as payment of mining rewards are subject to the Blockchain Nervous System, which wishes to create stability. It is also possible that whereas initially the DFINITY network will pay mining rewards in dfinities, eventually it will switch to using a price stable cryptofiat token such as PHI.



How can I screw up?

DFINITY uses new cryptography to hold clients to their promises. For example, the network determines whether clients have correctly maintained a unique copy of assigned state data using USCIDs (Unique State Copy IDs). If a client cannot produce a correct USCID when – for example – it creates a block, it will not be able to claim its reward. A more serious problem would occur if a client computer became hacked, since honeypot crypto can be stolen, and the client can even be permanently expelled from the network by the protocol if it performs a provably “Byzantine” act.
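The idea behind a unique state copy ID can be illustrated by binding the state digest to the client's identity, so two clients cannot share one copy of the data. This is only a conceptual sketch – the real USCID construction is different and more sophisticated:

```python
import hashlib

def uscid(state_bytes, client_id):
    """Conceptual sketch only: derive a state-copy ID from both the
    assigned state and the client's identity, so each client's ID is
    unique even when the underlying state is identical."""
    return hashlib.sha256(client_id.encode() + state_bytes).hexdigest()

state = b"...assigned shard state..."
uscid(state, "client-1") != uscid(state, "client-2")  # distinct IDs for the same state
```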



Describe an amateur setup

You will want, and be expected, to have fast connectivity. For example, you might install 10 server machines in your basement and connect them with consumer fibre. These might host 100 mining clients.



Describe a professional setup

You might start using cloud hosting, but will prefer to migrate to bespoke arrangements to maximize profits. You will distribute these and carefully firewall them from each other to make it harder for an attacker to gain widespread access as this would result in major losses.



How can I start mining from genesis?

DFINITY Stiftung will provide a procedure whereby people who are recommended genesis allocations of dfinities can assign them to mining identities in the genesis state it will propose. Before the network goes live, such miners must run special software that joins their mining identities to groups that will bootstrap DFINITY Threshold Relay (thus allowing the Copper release of the network to launch with a PSP blockchain).




What are “dfinities”?

Dfinities are participation tokens that in current designs perform four well-defined roles within the network:

○Fuel for running (and installing) smart contract software in the cloud.
○Security deposits for “mining identities” that allow mining client software to be joined to the network.
○Security deposits that enable “neurons” to be created that can participate in decentralized governance via the Blockchain Nervous System.
○Security deposits that allow private DFINITY cloud networks to be connected to the public network.




What can be used as currency on DFINITY?

Although dfinities will have value and might be exchanged, the general view within DFINITY is that currency needs to be stable and should either be created by existing financial institutions using the colored coin model (where, e.g., a bank stands behind tokens that it issues into the system) or by next-generation cryptofiat schemes that piggyback the economies where they are used, e.g. PHI (although PHI is unlikely to be available before 2018).



How can I get dfinities/DFN?

To acquire dfinities/DFN before the network goes live, you will need to participate in funding DFINITY Stiftung by making donations. Such donations will result in recommended allocations being made in a special smart contract on Ethereum (DFINITY will literally boot itself off Ethereum) that records part of the genesis state of the public DFINITY network. Note that DFINITY Stiftung cannot control participants in a decentralized network, and therefore, when it judges the client software sufficiently mature to launch the public network, it can only recommend to the worldwide mining community that they use the “official” software version that boots the network from the special Ethereum smart contract system.

Note that you should only consider making donations if you wish to see the network launched and participate for your own reasons. DFN are unsuitable as a speculative investment and are not intended to be used that way. Many factors, including undiscovered flaws in the new theories being applied by DFINITY, could lead to failure of the project, making DFN participation tokens useless and thus valueless. The worldwide mining community might even ignore the allocation recommendations of DFINITY Stiftung, which is not in a position to issue DFN. Currently, due to lack of regulatory clarity, DFINITY Stiftung is not planning on accepting donations from the USA. This is regrettable, but the positions of numerous agencies in the USA are ambiguous in ways that might place participants in jeopardy. Please contact us directly if you have specific questions.




What is the inflation?

In DFINITY all economic measures are subject to the Blockchain Nervous System, including inflation. Initially, it will issue new dfinities as mining rewards and thought mining rewards (provided to those running neurons). The precise amounts of dfinities issued will depend on fluctuations in the value of dfinities, on whether the BNS wants to create an incentive for miners to join additional clients, and on other factors. Eventually though, the BNS might decide rewards should be paid using a stable currency such as PHI or some other system – since it has complete control over the protocol – effectively ending inflation. The BNS is driven by neurons, and the owners of neurons will tend to make them favor decisions that maximize the value of deposited dfinities through driving effective network operation and mass adoption.


