
I am wondering: if the simulator is not PPT, what intuitive and formal contradiction do we get? For example, what if we can only construct a simulator running in exponential time?

This PPT condition on the simulator does not seem to come from computational hardness, since even for perfectly secure multiparty computation the simulator is still required to be PPT [1].

[1]: Definition 7.6.1, Oded Goldreich, Foundations of Cryptography.


1 Answer


This is a great question, and the answer is, perhaps surprisingly, that we care about PPT simulators in perfectly secure MPC also because of computational hardness.

Concretely, if you design a perfectly secure MPC protocol, you can prove it secure with an inefficient simulator. This is a valid proof of security for the perfectly secure protocol, but there is an important catch: perfectly secure MPC protocols exist in a model where the parties have access to ideal (private, authenticated) communication channels. The trouble is that in the real world you will usually instantiate these channels using public-key cryptography, hence making a computational assumption. However, the security proof of your perfect MPC protocol with idealized channels does not imply the security of this real-world version (where the channels are instantiated via a public-key infrastructure) if the simulator is inefficient! So if you have a perfect MPC protocol and want to use it in the real world, you have to find a "non-computational" way to instantiate the secure authenticated channels, and that's quite inconvenient.

Mostly for this reason, it is generally advocated to prove the security of perfectly secure protocols with a PPT simulator. That way, you are guaranteed to remain secure if some of the "idealized components" of your protocol are later replaced with computational instantiations (typically the channels, but also other idealized functionalities).
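To spell this out (the notation below is a loose sketch in the style of the real/ideal paradigm, e.g. the definition from Goldreich cited in the question, not a verbatim quote of it): security asks that for every adversary $\mathcal{A}$ with auxiliary input $z$ corrupting a set $I$ of parties, there exists a simulator $\mathcal{S}$ such that

$$\{\mathrm{IDEAL}_{f,\mathcal{S}(z),I}(\bar{x})\}_{\bar{x},z} \equiv \{\mathrm{REAL}_{\pi,\mathcal{A}(z),I}(\bar{x})\}_{\bar{x},z},$$

where $\equiv$ denotes identically distributed ensembles. Note that "perfect" refers to the $\equiv$; the requirement that $\mathcal{S}$ be PPT is a separate condition, and it is exactly the condition that survives when idealized components are later replaced by computational ones.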

For more on this point, see the discussion starting at the end of page 85 of this paper by Asharov and Lindell, which is generally a great resource for understanding the security analysis of MPC protocols.

  • Does that kind of reasoning apply to ZKP simulators and extractors as well? Like: the protocol is zero-knowledge because the simulator can produce a valid transcript, but that doesn't hold in real life if the simulator isn't PPT, because it couldn't exist in that world? So can we generally summarize by saying that simulators can/must play outside the protocol rules, but NOT outside the model constraints? – baro77, Sep 19 at 8:00
  • In the case of zero-knowledge, you want the definition to capture the intuition that "whatever the verifier V learned from the proof, it could have computed itself". If V is PPT, you therefore want the simulator to be PPT (so that V could have run the simulator). With an inefficient simulator, you only get the guarantee that V did not learn more than what it could already deduce alone when running in unbounded time. But such a V can easily find a witness! – Sep 19 at 8:18
  • However, note that you still get a non-trivial security guarantee with an inefficient simulator: you cannot argue that V cannot recover a witness, but you can still argue that V cannot discover which witness was used by the prover (when there are several), since the inefficient simulator can simulate without knowing which witness was used. This "zero-knowledge with an inefficient simulator" property has a name: it is called witness indistinguishability, and it is also very useful (though less powerful than standard ZK). – Sep 19 at 8:20
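To make the intuition in these comments concrete (the notation below is the standard one for interactive proofs, sketched rather than quoted from a specific source): a protocol $(P, V)$ for a language $L$ is zero-knowledge if for every PPT verifier $V^*$ there exists a PPT simulator $S$ such that, for all $x \in L$ with witness $w$,

$$\{\mathrm{view}_{V^*}\big(P(x,w) \leftrightarrow V^*(x)\big)\} \approx \{S(x)\}.$$

If $S$ were allowed unbounded time, it could simply search for some witness for $x$ and run the honest prover with it, so the definition would no longer say anything about $V^*$ learning a witness; it would only guarantee that the transcript does not reveal which witness the prover used, which is precisely witness indistinguishability.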
