Mirror of https://github.com/microsoft/CCF.git
Update node certificate renewal documentation diagram (#3358)
Parent: 3c2c917f65
Commit: 45fbadc02c
File diffs are hidden because one or more lines are too long.
Before: Width: | Height: | Size: 5.0 KiB |

@@ -8,12 +8,39 @@ Since 2.x releases, the validity period of certificates is no longer hardcoded.
Node Certificates
-----------------
At startup, operators can set the validity period for a node using the ``node_certificate.initial_node_cert_validity_days`` configuration entry. The default value is 1 day and it is expected that members will issue a proposal to renew the certificate before it expires, once the service is open. Initial node certificates are valid from the current system time when the ``cchost`` executable is launched.
At startup, operators can set the validity period for a node using the ``node_certificate.initial_validity_days`` configuration entry. The default value is 1 day and it is expected that members will issue a proposal to renew the certificate before it expires, once the service is open. Initial node certificates are valid from the current system time when the ``cchost`` executable is launched.
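For illustration, a minimal sketch of how this entry might appear in a node configuration file is shown below. The nesting is inferred purely from the dotted entry name, all other configuration entries are omitted, and the file name is hypothetical; this is written as a small Python script for convenience, not as a definitive configuration template.

.. code-block:: python

    # Minimal sketch: emit a configuration fragment containing only the
    # entry discussed above. The nesting is inferred from the dotted name
    # node_certificate.initial_validity_days; a real configuration file
    # contains many more entries.
    import json

    node_config = {
        "node_certificate": {
            # Default is 1 day; members are expected to renew the
            # certificate before it expires once the service is open.
            "initial_validity_days": 1
        }
    }

    # "config.json" is a hypothetical file name used for illustration.
    with open("config.json", "w") as f:
        json.dump(node_config, f, indent=2)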
The ``start.service_configuration.maximum_node_certificate_validity_days`` configuration entry (defaults to 365 days) sets the maximum allowed validity period for node certificates when they are renewed by members. It is also used as the default validity period when a node certificate is renewed without an explicit validity period.
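Similarly, a hedged sketch of where this entry sits, inferred only from the dotted name above (surrounding entries and the exact schema may differ between CCF versions):

.. code-block:: python

    # Sketch only: nesting inferred from the dotted name
    # start.service_configuration.maximum_node_certificate_validity_days.
    import json

    start_config = {
        "start": {
            "service_configuration": {
                # Upper bound, and default, for the validity period used
                # when members renew node certificates.
                "maximum_node_certificate_validity_days": 365
            }
        }
    }

    print(json.dumps(start_config, indent=2))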
.. tip:: Once a node certificate has expired, clients will no longer trust the node serving their request. It is expected that operators and members will monitor certificate validity dates against the current time and renew node certificates before they expire. See :ref:`governance/common_member_operations:Renewing Node Certificate` for more details.
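One possible way to monitor this is sketched below, using the third-party ``cryptography`` package (an assumption for illustration, not a CCF requirement; any X.509 tooling works) and a hypothetical certificate path:

.. code-block:: python

    # Sketch: report how long a node certificate remains valid.
    # "node_cert.pem" is a hypothetical path to the node's certificate.
    from datetime import datetime, timezone
    from cryptography import x509

    with open("node_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    # not_valid_after is a naive UTC datetime, so compare against naive UTC now.
    now = datetime.now(timezone.utc).replace(tzinfo=None)
    remaining = cert.not_valid_after - now
    print(f"Node certificate expires in {remaining.days} days")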
The procedure that operators and members should follow is summarised in the following example. A 3-node service is started by operators and the initial certificate validity period is set by ``node_certificate.initial_node_cert_validity_days`` (blue). Before these certificates expire, the service is opened by members, who renew the certificate for each node via the ``set_all_nodes_certificate_validity`` proposal action, either standalone or bundled with the existing ``transition_service_to_open`` action (green). When a new node (3) joins the service, members should set the validity period for its certificate when submitting the ``transition_node_to_trusted`` proposal (pale orange). Finally, operators and members should issue a new proposal to renew soon-to-expire node certificates (red).
The procedure that operators and members should follow is summarised in the following example. A 3-node service is started by operators and the initial certificate validity period is set by ``node_certificate.initial_validity_days`` (grey). Before these certificates expire, the service is opened by members, who renew the certificate for each node via the ``set_all_nodes_certificate_validity`` proposal action, either standalone or bundled with the existing ``transition_service_to_open`` action. When a new node (3) joins the service, members should set the validity period for its certificate when submitting the ``transition_node_to_trusted`` proposal. Finally, operators and members should issue a new proposal to renew soon-to-expire node certificates (red).
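As an illustration of what such a proposal could look like, here is a hedged sketch that bundles the two actions named above. The action names come from this page; the argument names and value formats (``valid_from``, ``validity_period_days``) are assumptions for illustration and should be checked against the constitution and proposal schema in use.

.. code-block:: python

    # Illustrative proposal body bundling the actions named above.
    # Argument names and formats are assumptions, not an authoritative schema.
    import json

    proposal = {
        "actions": [
            {"name": "transition_service_to_open", "args": {}},
            {
                "name": "set_all_nodes_certificate_validity",
                "args": {
                    "valid_from": "2022-01-01T15:00:00Z",  # assumed format
                    "validity_period_days": 365,
                },
            },
        ]
    }

    print(json.dumps(proposal, indent=2))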
.. image:: ../img/node_cert_renewal.svg

.. mermaid::

    gantt

        dateFormat MM-DD/HH:mm
        axisFormat %d/%m
        todayMarker off

        section Members
        Service Open By Members + set_all_nodes_certificate_validity :milestone, 01-01/15:00, 0d
        Members trust new node 3 (transition_node_to_trusted) :milestone, 01-03/15:00, 0d
        Members must renew certs before expiry (set_all_nodes_certificate_validity) :crit, 01-05/15:00, 1d

        section Node 0
        Initial Validity Period (24h default): done, 01-01/00:00, 1d
        Post Service Open Validity Period : 01-01/15:00, 5d

        section Node 1
        Initial Validity Period (24h default): done, 01-01/01:00, 1d
        Post Service Open Validity Period : 01-01/15:00, 5d

        section Node 2
        Initial Validity Period (24h default): done, 01-01/02:00, 1d
        Post Service Open Validity Period : 01-01/15:00, 5d

        section Node 3
        Initial Validity Period (24h default) : done, 01-03/00:00, 1d
        New Joiner Validity Period : 01-03/15:00, 4d

@@ -2,7 +2,7 @@ Research
========
:doc:`TLA+ model of CCF's Raft modifications <raft-tla>`
CCF implements some modifications to Raft as it was originally proposed by Ongaro and Ousterhout. Specifically, CCF constrains commit so that only appended entries that were signed by the primary can be committed; any other entry that has not been globally committed is rolled back. Additionally, the CCF implementation introduces a variant of reconfiguration that differs from the one proposed in the original Raft paper. In CCF CFT, reconfigurations are done via one transaction (as described :doc:`here </overview/consensus>`).
CCF implements some modifications to Raft as it was originally proposed by Ongaro and Ousterhout. Specifically, CCF constrains commit so that only appended entries that were signed by the primary can be committed; any other entry that has not been globally committed is rolled back. Additionally, the CCF implementation introduces a variant of reconfiguration that differs from the one proposed in the original Raft paper. In CCF CFT, reconfigurations are done via one transaction (as described :doc:`here </overview/consensus/1tx-reconfig>`).
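To make the commit rule concrete, here is a toy sketch in Python (not CCF code and not the TLA+ specification): entries are committable only up to the latest primary signature, and an uncommitted suffix can be rolled back.

.. code-block:: python

    # Toy model of the commit rule described above: only a log prefix
    # ending in a primary signature may become committed; entries beyond
    # the committed prefix can be rolled back.
    from dataclasses import dataclass

    @dataclass
    class Entry:
        term: int
        is_signature: bool = False  # True if the entry is a primary signature

    def committable_index(log: list[Entry]) -> int:
        """Highest 1-based index ending a signed prefix, or 0 if none."""
        for i in range(len(log), 0, -1):
            if log[i - 1].is_signature:
                return i
        return 0

    log = [Entry(1), Entry(1), Entry(1, is_signature=True), Entry(1)]
    commit_index = committable_index(log)  # 3: the signed prefix
    log = log[:commit_index]               # roll back the unsigned suffix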
The TLA+ model of CCF's Raft changes can be found in the `CCF GitHub repository <https://github.com/microsoft/CCF/tree/main/tla>`_.

@@ -18,8 +18,10 @@ Research
leverages trust in a consortium of governing members and in a network of replicated hardware-protected execution
environments to achieve high throughput, low latency, strong integrity and strong confidentiality for application data
and code executing on the ledger.

.. toctree::
    :hidden:

    raft-tla
    PAC: Practical Accountability for CCF <https://arxiv.org/abs/2105.13116>
    CCF whitepaper <https://github.com/microsoft/CCF/blob/main/CCF-TECHNICAL-REPORT.pdf>

@@ -1,7 +1,7 @@
TLA+ model of CCF's Raft modifications
======================================
The TLA+ specification models the intended behavior of Raft as it is modified for CCF. Below, we explain several core parts of the specification in more detail.
You can find the full specification in the `CCF GitHub repository <https://github.com/microsoft/CCF/tree/main/tla>`_ and more information on TLA+ `here <http://lamport.azurewebsites.net/tla/tla.html>`_. Several good resources exist online; one example is `this guide <https://www.learntla.com/introduction/about-this-guide/>`_.

@@ -63,7 +63,7 @@ In CCF, the leader periodically signs the latest log prefix. Only these signatur
Reconfiguration steps
---------------------
The one-transaction reconfiguration is already described :doc:`here </overview/consensus>`. In the TLA+ model, a reconfiguration is initiated by the Leader, which appends an arbitrary new configuration to its own log. This also triggers a change in the ``Configurations`` variable, which keeps track of all running configurations.
The one-transaction reconfiguration is already described :doc:`here </overview/consensus/1tx-reconfig>`. In the TLA+ model, a reconfiguration is initiated by the Leader, which appends an arbitrary new configuration to its own log. This also triggers a change in the ``Configurations`` variable, which keeps track of all running configurations.
In the following, this ``Configurations`` variable is then checked to calculate quorums and to determine which nodes messages should be sent to or received from.
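As a toy illustration of how such a check might work (a simplification for this page, not the TLA+ model itself), assume that agreement requires a majority of every currently active configuration:

.. code-block:: python

    # Toy sketch: track the set of active configurations and check whether
    # a set of acknowledging nodes forms a quorum in each of them, assuming
    # agreement requires a majority of every active configuration.

    def is_quorum(acks: set[str], configuration: set[str]) -> bool:
        """Majority of a single configuration."""
        return len(acks & configuration) > len(configuration) // 2

    def committable(acks: set[str], configurations: list[set[str]]) -> bool:
        """An entry is committable only if every active configuration agrees."""
        return all(is_quorum(acks, c) for c in configurations)

    # The old configuration {n0, n1, n2} and a newly appended one {n1, n2, n3}
    # are both active until the reconfiguration entry commits.
    configurations = [{"n0", "n1", "n2"}, {"n1", "n2", "n3"}]
    print(committable({"n1", "n2"}, configurations))  # True: majority of both
    print(committable({"n0", "n1"}, configurations))  # False: not a majority of the new one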