Silentium: Implementation of a Pseudorandom Correlation Generator for Beaver Triples
Secure Multi-Party Computation is a privacy-enhancing technology that allows several parties to securely compute on distributed private data.
In the line of work on the well-established SPDZ protocol, by far the most expensive task is the generation of Beaver triples in the so-called offline phase.
Silentium is our implementation of an actively secure offline phase in the form of a Pseudorandom Correlation Generator for Beaver triples (Bt-PCG, Boyle et al. CRYPTO 2020), which, like any PCG, is designed to have low communication. Compared to previous offline phases, their Bt-PCG reduces the communication costs by three orders of magnitude. So far, however, its efficiency had only been estimated. With Silentium, we demonstrate that their Bt-PCG can achieve even better running times than state-of-the-art offline phase implementations in the MP-SPDZ library. To achieve this performance, Silentium combines a systematic parallelization strategy with an implementation-friendly decomposition of the Bt-PCG into structured modules.
Looking ahead to large-scale applications in the cloud, Silentium is designed to be versatile enough to support hardware acceleration in the future.
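As background on the object being generated: a Beaver triple $(a, b, c)$ with $c = ab$ lets parties multiply additively secret-shared values by opening only masked differences. Below is a minimal two-party sketch of this standard trick, with a toy modulus and plain Python; it is entirely independent of Silentium's actual Bt-PCG, whose job is to generate many such triples with low communication.

```python
# Minimal sketch of Beaver-triple multiplication on additive shares (toy only).
import random

P = 2**61 - 1  # toy prime modulus

def share(v):
    """Split v into two additive shares mod P."""
    s0 = random.randrange(P)
    return s0, (v - s0) % P

# Offline phase (in SPDZ, produced by a trusted dealer or a Bt-PCG):
a, b = random.randrange(P), random.randrange(P)
c = a * b % P
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# Online phase: parties hold shares of x and y.
x, y = 12345, 67890
x0, x1 = share(x); y0, y1 = share(y)

# Parties open d = x - a and e = y - b; these values reveal nothing about x, y.
d = (x0 - a0 + x1 - a1) % P
e = (y0 - b0 + y1 - b1) % P

# Local computation of shares of x*y; the public term d*e is added once.
z0 = (c0 + d * b0 + e * a0 + d * e) % P
z1 = (c1 + d * b1 + e * a1) % P
assert (z0 + z1) % P == x * y % P
```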
Nearly Optimal Parallel Broadcast in the Plain Public Key Model
Parallel Byzantine broadcast (PBC), also known as Interactive Consistency, is a fundamental problem in distributed computing and cryptography which asks that all parties reliably distribute a message to all other parties. We give the first communication-efficient protocol for PBC in the model with plain public keys (i.e., no trusted dealer) which achieves security against an adaptive adversary that can corrupt up to $t<n/2$ parties.
Our protocol runs in total communication complexity $O(n^2\ell\log(n)+n\kappa^2\log^4(n))$ bits to succeed with probability $1-2^{-\kappa}$, where $\ell$ is the length of a message. All prior protocols either rely on a trusted setup or require at least $O(n^3)$ communication complexity. As a stepping stone, we present a binary consensus protocol with the same resilience and success probability that sends $O(n^2\kappa\log(n)+n\kappa^2\log^3(n))$ bits.
We achieve these results based on a highly efficient gossip procedure that implements echo operations at low cost, and might prove useful in deriving further efficient protocols relying on simple cryptographic tools.
Adaptive TDFs from Injective TDFs
Adaptive trapdoor functions (ATDFs) and tag-based ATDFs (TB-ATDFs) are variants of trapdoor functions proposed by Kiltz, Mohassel, and O’Neill (EUROCRYPT 2010). They are both sufficient for constructing chosen-ciphertext secure public-key encryption (CCA-secure PKE), and their definitions are closely related to CCA-secure PKE.
Hohenberger, Koppula, and Waters (CRYPTO 2020) showed that CCA-secure PKE can be constructed from injective TDFs; however, the relations among TDF, ATDF, and TB-ATDF remain unclear.
We provide black-box constructions of ATDFs and TB-ATDFs from injective TDFs, answering the question posed by Kiltz, Mohassel, and O’Neill (EUROCRYPT 2010).
Our results indicate that ATDF, TB-ATDF, and TDF are equivalent under mild restrictions.
UPKE and UKEM Schemes from Supersingular Isogenies
Forward-secure public key encryption (FS-PKE) is a key-evolving public-key paradigm that ensures the confidentiality of past encryptions even if the secret key is compromised at some later point in time. However, existing FS-PKE schemes are considerably more complex and less efficient than standard public-key encryption. Updatable public-key encryption (UPKE), introduced by Jost et al. (Eurocrypt 2019), was designed to achieve forward security in secure group messaging while maintaining efficiency. However, existing UPKE constructions either lack post-quantum security or do not support an unbounded number of updates. We focus on isogeny-based cryptosystems due to their suitability for handling an unbounded number of updates in long-term secure messaging. Existing isogeny-based UPKE schemes lack strong security guarantees and formal security proofs; they do not support asynchronous key updates and require sender-receiver coordination.
In this work, we present two isogeny-based UPKE schemes. The first scheme, UhPKE, extends Moriya et al.'s hash-based public key encryption scheme hPKE to support key updates, while the second scheme, USimS, is an updatable version of Fouotsa et al.'s public key encryption scheme simplified SiGamal (SimS). The scheme UhPKE relies on the commutative supersingular isogeny Diffie-Hellman (CSSIDH) assumption and achieves indistinguishability under chosen randomness and chosen plaintext attack (IND-CR-CPA). The scheme USimS derives its security from the hardness of the CSSIDH problem and the commutative supersingular isogeny knowledge of exponent (CSSIKoE) problem. It is the first isogeny-based UPKE scheme that exhibits indistinguishability under chosen randomness and chosen ciphertext attack (IND-CR-CCA). The security of UhPKE and USimS is established by proving that their underlying schemes, hPKE and SimS, are circular-secure and leakage-resilient (CS+LR). We emphasize that our constructions support an unlimited number of key updates while retaining the efficiency of their underlying public key encryption schemes. Moreover, the proposed UPKE schemes enable asynchronous key updates, allowing senders to update the public key independently. Furthermore, UhPKE and USimS offer improved storage, computation and communication efficiency compared to existing UPKE schemes.
Furthermore, we extend and refine the security notion of the updatable key encapsulation mechanism (UKEM) introduced by Haidar et al. (Asiacrypt 2023) from a bounded to an unbounded number of updates. We present the first post-quantum secure UKEM that does not rely on zero-knowledge proofs. More precisely, we introduce two UKEM schemes which are the first of their kind in the isogeny setting. Our first scheme, UKEM1, is derived from our UhPKE and achieves IND-CR-CPA security. Our second construction, UKEM2, is based on our USimS scheme and achieves IND-CR-CCA security. We prove our UKEMs secure in our proposed enhanced security framework that supports an unbounded number of key updates. Notably, our UKEMs not only support unlimited key updates but also enable independent encapsulation and decapsulation key updates without requiring sender-receiver synchronization, just like our UPKE schemes. Both UKEM1 and UKEM2 exhibit compact storage and communication costs with minimal-size ciphertexts. Their computational efficiency differs in decapsulation and key updates: UKEM2 incurs an additional discrete logarithm computation in the decapsulation phase, but offers the stronger IND-CR-CCA security, in contrast to UKEM1, which is IND-CR-CPA secure.
Adaptively Secure Three-Round Threshold Schnorr Signatures from DDH
Threshold signatures are one of the most important cryptographic primitives in distributed systems. Of particular interest is the threshold Schnorr signature, a pairing-free signature with efficient verification, compatible with the standardized (non-threshold) EdDSA signature scheme. However, most threshold Schnorr signatures have only been proven secure against a static adversary, which has to declare its corruptions before the protocol execution. Many existing adaptively secure constructions require either secure erasures or non-standard assumptions, such as the algebraic group model or the hardness of the algebraic one-more discrete logarithm problem. The latest adaptively secure threshold Schnorr signature schemes under standard assumptions require five rounds of communication to create a single signature, limiting their practicality.
In this work, we present Gargos, a three-round, adaptively secure threshold Schnorr signature scheme based on the hardness of the decisional Diffie-Hellman (DDH) problem in the random oracle model (ROM). Our protocol supports full corruption threshold $t < n$, where $t$ is the signing threshold and $n$ is the total number of signers. We achieve our result with an enhanced proof technique that enables us to eliminate two rounds of communication from the recent Glacius scheme (Eurocrypt 2025). We believe our techniques are of independent interest to further research in improving the round complexity of multi-party signing protocols.
Reviving a Grover based Quantum Secret Sharing Scheme
Secret-sharing schemes allow a dealer to split a secret into multiple “shares” and distribute them individually among many parties while mandating certain constraints on its reconstruction. Such protocols are usually executed over a secure communication channel since an eavesdropper, after intercepting all the shares, is expected to be able to reconstruct the secret. Leveraging the unique properties of quantum channels, several quantum protocols have been designed for secret sharing. However, almost all of them detect the presence of an eavesdropper by statistical analysis of the outcome of multiple rounds, or simply require a secure channel of communication.
We mathematically analyse the correctness and security properties of a quantum-search-based secret-sharing framework proposed by Hsu (2003) (and attacked by Hao et al. (2010)) as an alternative that works over public channels and does not require multiple rounds. We show how to improve the original protocol to be more resistant towards eavesdropping and other attacks; however, we also prove that complete security against an eavesdropper is not possible in this framework. Our tight characterization will be helpful towards the construction of more quantum secret-sharing schemes based on the same framework.
Scalable Multiparty Computation from Non-linear Secret Sharing
A long line of work has investigated the design of scalable secure multiparty computation (MPC) protocols with computational and communication complexity independent of the number of parties (beyond any dependence on the circuit size). We present the first unconditionally-secure MPC protocols for arithmetic circuits over {\em large fields} with total computation $\mathcal{O}(|C|\log|F|)$, where $|C|$ and $|F|$ denote the circuit and field size, respectively.
Prior work could achieve similar complexity only in {\em communication}, or else required highly structured circuits or expensive circuit transformations. To obtain our results, we depart from the prior approach of share packing in linear secret-sharing schemes; instead, we use an ``unpacking'' approach via {\em non-linear} secret sharing.
Adding Feeding Forward Back to the Sponge Construction
Avoiding feeding forward seems to be a major goal of the sponge construction. We take a step back and investigate adding feeding forward back to the sponge. The obtained sponge-with-feeding-forward construction has a number of benefits: (1) In the random permutation model, its preimage and second preimage security bounds are much better than those of the standard sponge with the same capacity, while its collision and indifferentiability security bounds are comparable; (2) Its collision and (second) preimage security can be reduced to well-defined properties of the underlying permutation, i.e., correlation intractability with respect to a certain family of evasive relations.
We further incorporate several somewhat new ideas to form detailed hash and XOF constructions SpongeFwd: (1) Feeding forward is only applied to the capacity part, and the final output(s) is the capacity part (with the rate part discarded). In this way, when $c$ equals the primary hash output size $h$, a single permutation call suffices for squeezing. This also simplifies the underlying evasive relations for the reduction-based security proof. (2) We replace the hash IV with the first message block to achieve the best possible efficiency. (3) We employ a parallel structure to define an XOF variant. (4) We use HAIFA-style counter inputs to achieve both length-independent second-preimage security and domain separation for the XOF.
The better (second) preimage security enables constructing a 512-bit-output hash function from Keccak-p[800]: with 512-bit capacity, its collision and (second) preimage security bounds are the same as those of the standard SHA3-512, while its hardware area is reduced by roughly 38% (according to our preliminary estimate).
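To make the structural idea concrete, here is a toy absorb loop in which only the capacity half of the state receives a feed-forward XOR and the capacity is used as the output. This is an illustration of the general idea only, with hypothetical sizes and a stand-in permutation; it is not the paper's SpongeFwd specification.

```python
# Toy capacity-only feed-forward sponge (illustrative, NOT the SpongeFwd spec).
import hashlib

RATE, CAP = 16, 32  # toy sizes in bytes

def perm(state: bytes) -> bytes:
    """Stand-in for a public permutation such as Keccak-p (toy only)."""
    return hashlib.sha3_384(state).digest()[:RATE + CAP]

def absorb(blocks):
    rate, cap = bytes(RATE), bytes(CAP)
    for m in blocks:  # each m is RATE bytes
        out = perm(bytes(x ^ y for x, y in zip(rate, m)) + cap)
        # feed forward applied to the capacity part only
        cap = bytes(x ^ y for x, y in zip(out[RATE:], cap))
        rate = out[:RATE]
    return cap  # per the abstract's idea, output the capacity part

print(absorb([b'A' * RATE, b'B' * RATE]).hex())
```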
TEAKEX: TESLA-Authenticated Group Key Exchange
We present a highly efficient authenticated group key exchange protocol, TEAKEX, using only symmetric key primitives. Our protocol provides provably strong security, including forward secrecy, post-compromise security, and post-quantum security. For online applications we claim that TEAKEX is much simpler and more efficient than currently available alternatives. As part of our construction we also give a new abstract security definition for delayed authentication and describe its instantiation with the TESLA protocol.
On Factoring and Power Divisor Problems via Rank-3 Lattices and the Second Vector
We propose a deterministic algorithm based on Coppersmith's method that employs a rank-3 lattice to address factoring-related problems. An interesting aspect of our approach is that we utilize the second vector in the LLL-reduced basis to avoid trivial collisions in the Baby-step Giant-step method, rather than the shortest vector as is commonly used in the literature. Our results are as follows:
- Compared to the result by Harvey and Hittmeir (Math. Comp. 91 (2022), 1367–1379), who achieved a complexity of $O\left( \frac{N^{1/5} \log^{16/5} N}{(\log \log N)^{3/5}} \right)$ for factoring a semiprime $N = pq$, we demonstrate that in the balanced $p$ and $q$ case, the complexity can be improved to $O\left( \frac{N^{1/5} \log^{13/5} N}{(\log\log N)^{3/5}} \right).$
- For factoring sums and differences of powers, i.e., numbers of the form $N = a^n \pm b^n$, we improve Hittmeir's result (Math. Comp. 86 (2017), 2947–2954) from $O(N^{1/4} \log^{3/2} N)$ to $O\left( {N^{1/5} \log^{13/5} N} \right).$
- For the problem of finding $r$-power divisors, i.e., finding all integers $p$ such that $p^r \mid N$, Harvey and Hittmeir (Proceedings of ANTS XV, Res. Number Theory 8 (2022), no.4, Paper No. 94) recently directly applied Coppersmith's method and achieved a complexity of $O\left(\frac{N^{1/4r} \log^{10+\epsilon} N}{r^3}\right)$. By using a faster LLL-type algorithm and sieving on small primes, we improve their result to $O\left(\frac{N^{1/4r} \log^{7+3\epsilon} N}{(\log\log N-\log 4r)r^{2+\epsilon}}\right)$. The worst case running time for their algorithm occurs when $N = p^r q$ with $q = \Theta(N^{1/2})$. By focusing on this case and employing our rank-3 lattice approach, we achieve a complexity of $O\left(\sqrt{r} N^{1/4r} \log^{5/2} N \right).$
In conclusion, we offer a new perspective on these problems, which we hope will provide further insights.
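For readers unfamiliar with the collision search underlying these results, a standard baby-step giant-step routine looks as follows. This is the generic discrete-log version as background only; the paper's use of BSGS in a factoring context replaces the group and the collision test.

```python
# Generic baby-step giant-step (background sketch, not the paper's algorithm).
from math import isqrt

def bsgs(g, h, p):
    """Find x with g^x = h (mod p), assuming it exists; O(sqrt p) time/memory."""
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}      # baby steps g^j
    factor = pow(g, -m, p)                          # g^(-m) mod p
    gamma = h
    for i in range(m):                              # giant steps h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    return None

assert bsgs(5, pow(5, 1234, 10007), 10007) == 1234
```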
Low-Latency Dynamically Available Total Order Broadcast
This work addresses the problem of Byzantine Fault-Tolerant (BFT) Total-Order Broadcast (TOB) in a dynamically available setting, where parties can transition between online and offline states without knowing the number of active parties. Existing dynamically available protocols rely on a synchronous network assumption, which means their latency remains tied to the pessimistic network delay $\Delta$, even when the actual network delay is $\delta \ll \Delta$. This raises the question of whether a dynamically available BFT TOB protocol can maintain safety and liveness under synchrony while committing blocks at a rate closer to the actual network delay. We answer this question affirmatively by designing the first dynamically available BFT TOB protocol that commits blocks at a rate of $O(\Delta_{\mathrm{ideal}})$, where $\Delta_{\mathrm{ideal}} < 2\delta$.
Cool + Cruel = Dual
Recently [Wenger et al.~IEEE S\&P 2025] claimed that the `Cool and Cruel' (C+C) approach to solving LWE with sparse secrets [Nolte et al.~AFRICACRYPT 2024] outperforms other approaches, in particular the well established primal attack.
In this work we show:
i.~that C+C is an instantiation of a known dual attack [Albrecht, EUROCRYPT 2017];
ii.~experimental evidence that the primal attack can outperform C+C in regimes similar to those studied by Wenger et al.; and
iii.~both theoretical justification and experimental evidence that C+C is a consequence of a basis profile called the Z-shape.
To prove i.~we introduce a framework for dimension reduction in bounded distance decoding problems that may be of independent interest.
For ii.~we provide an open source implementation of the primal attack that is properly parametrised for short, sparse ternary secret LWE and guesses portions of the secret, along with an error analysis for a rounded variant of LWE that proves useful for practical cryptanalysis.
Given iii.~we falsify a claim of Nolte et al.
A Plausible Attack on the Adaptive Security of Threshold Schnorr Signatures
The standard notion of security for threshold signature schemes is static security, where the set of corrupt parties is assumed to be fixed before protocol execution. In this model, the adversary may corrupt up to $t-1$ out of a threshold of $t$ parties. A stronger notion of security for threshold signatures considers an adaptive adversary, who may corrupt parties dynamically based on its view of the protocol execution, learning the corrupted parties’ secret keys as well as their states. Adaptive security of threshold signatures has become an active area of research recently due to ongoing standardization efforts. Of particular interest is full adaptive security, the analogue of static security, where the adversary may adaptively corrupt a full $t-1$ parties.
We present a plausible attack on the full adaptive security of threshold Schnorr signature schemes with public key shares of the form $pk_i = g^{sk_i}$, where all secret keys $sk_i$ lie on a polynomial. We show that a wide range of threshold Schnorr signature schemes, including all variants of FROST, Sparkle, and Lindell’22, cannot be proven fully adaptively secure without modifications or assuming the hardness of a search problem that we define in this work. We then prove a generalization that extends to below $t-1$ adaptive corruptions.
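A minimal sketch of the key structure in question, with toy parameters and no claim of being a secure instantiation: the secret key shares $sk_i = f(i)$ lie on a degree-$(t-1)$ polynomial, the public key shares are $pk_i = g^{sk_i}$, and Lagrange interpolation recombines shares both directly and "in the exponent".

```python
# Toy Shamir-shared Schnorr key structure (hypothetical parameters, not secure).
import random

p, q, g = 2039, 1019, 4   # toy group: p = 2q+1, g generates the order-q subgroup
t, n = 3, 5               # signing threshold t, total signers n

sk = random.randrange(1, q)
coeffs = [sk] + [random.randrange(q) for _ in range(t - 1)]

def f(x):  # degree-(t-1) polynomial with f(0) = sk
    return sum(c * pow(x, k, q) for k, c in enumerate(coeffs)) % q

shares = {i: f(i) for i in range(1, n + 1)}                # sk_i = f(i)
pk_shares = {i: pow(g, s, p) for i, s in shares.items()}   # pk_i = g^{sk_i}
pk = pow(g, sk, p)

def lagrange_at_zero(xs):
    """Lagrange coefficients for interpolating f(0) from evaluation points xs."""
    lams = []
    for xi in xs:
        num = den = 1
        for xj in xs:
            if xj != xi:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        lams.append(num * pow(den, -1, q) % q)
    return lams

xs = [1, 3, 5]
lams = lagrange_at_zero(xs)
assert sum(l * shares[x] for l, x in zip(lams, xs)) % q == sk
# The same interpolation works "in the exponent" on the public key shares:
acc = 1
for l, x in zip(lams, xs):
    acc = acc * pow(pk_shares[x], l, p) % p
assert acc == pk
```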
Post-Quantum Multi-Message Public Key Encryption from Extended Reproducible PKE
A multi-message multi-recipient Public Key Encryption (mmPKE) enables batch encryption of multiple messages for multiple independent recipients in one go, significantly reducing costs, particularly bandwidth, compared to the trivial solution of encrypting each message individually. This capability is especially critical in the post-quantum setting, where ciphertext length is typically significantly larger than the corresponding plaintext.
In this work, we first observe that the generic construction of mmPKE from reproducible PKE proposed by Bellare et al. (PKC ’03) does not apply in the lattice-based setting, because existing lattice-based PKE schemes do not fit the notion of reproducible PKE. To this end, we extend their construction by proposing a new variant of PKE, named extended reproducible PKE (XR-PKE), which enables the reproduction of ciphertexts via additional hints. However, standard lattice-based PKE schemes, such as Kyber (EuroS&P '18), do not readily satisfy the XR-PKE requirements. To construct XR-PKE from lattices, we introduce a novel technique for precisely estimating the impact of such hints on ciphertext security, while also establishing suitable parameters. This enables us to instantiate the first CPA-secure mmPKE and Multi-Key Encapsulation Mechanism (mmKEM) from the standard Module Learning with Errors (MLWE) lattice assumption, named mmCipher-PKE and mmCipher-KEM, respectively. We then extend our work to the identity-based setting and construct the first mmIBE and mmIB-KEM schemes. As a bonus contribution, we explore generic constructions of adaptively secure mmPKE, achieving security against adaptive corruption and chosen-ciphertext attacks.
We also provide an efficient implementation and a thorough evaluation of the practical performance of our mmCipher schemes. Our results show that mmCipher provides significant bandwidth and computational savings in practice compared to the state of the art. For example, for 1024 recipients, our mmCipher-KEM achieves a 23–45× reduction in bandwidth overhead, reaching within 4–9% of the plaintext length (near-optimal bandwidth), while also offering a 3–5× reduction in computational cost.
Insecurity of One Ring Signature Scheme with Batch Verification for Applications in VANETs
We show that the Negi-Kumar certificateless ring signature scheme [Wirel. Pers. Commun. 134(4): 1987-2011 (2024)] is insecure against forgery attacks. The signer's public key $PK_j$ and secret key $PSK_j$ are simply invoked to compute the hash value $H_{2_j}=h_5(m_j\|PSK_j\|PK_j\|t_j)$, which cannot be recomputed by the verifier to check the claimed dependency. The explicit dependency between the public key and secret key is not properly tied to an intractable problem, such as Elliptic Curve Discrete Logarithm (ECDL), Computational Diffie-Hellman (CDH), or Decisional Diffie-Hellman (DDH). As a result, an adversary can find an efficient signing algorithm functionally equivalent to the valid signing algorithm. The findings in this note could be helpful to newcomers who are not familiar with the design techniques for certificateless ring signatures.
On the UC-(In)Security of PAKE Protocols Without the Random Oracle Model
A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to jointly establish a cryptographic key, where the only information shared in advance is a low-entropy password. The first efficient PAKE protocol whose security does not rely on the random oracle model is the one by Katz, Ostrovsky and Yung (KOY, EUROCRYPT 2001). Unfortunately, the KOY protocol has only been proven secure in the game-based setting, and it is unclear whether KOY is secure in the stronger Universal Composability (UC) framework, which is the current security standard for PAKE.
In this work, we present a thorough study of the UC-security of KOY. Our contributions are two-fold:
1. We formally prove that the KOY protocol is not UC-secure;
2. We then show that the UC-security of KOY holds in the Algebraic Group Model, under the Decisional Square Diffie-Hellman (DSDH) assumption.
Overall, we characterize the exact conditions under which KOY is UC-secure. Interestingly, the DSDH assumption is stronger than the DDH assumption under which KOY can be proven game-based secure, which reveals subtle gaps between the two PAKE security notions that have never been studied.
Kerblam — Anonymous Messaging System Protecting Both Senders and Recipients
While popular messaging apps already offer end-to-end confidentiality, end-to-end metadata privacy is still far from practical. Although several metadata-hiding systems have been developed and some, like Tor, are popular, the proposed solutions fall short in one or more aspects: the Tor network is prone to cheap low-resource attacks, and most others focus solely on anonymity for senders or for receivers, but not both. Some recent solutions do consider end-to-end anonymity; however, they put significant restrictions on how users use the system. In particular, receivers must stay online or trust online servers that receive messages on their behalf. This work presents Kerblam, a scalable end-to-end anonymous messaging system that overcomes the aforementioned issues and restrictions. It stems from a key observation that combining the recently emerged oblivious message retrieval (OMR) primitive with oblivious shuffling can offer the desired end-to-end anonymity without severely restricting the number of messages a sender may send or a receiver may receive. We build our solution using two non-colluding servers, the recent OMR protocol HomeRun, and a compatible oblivious shuffle protocol. We then extend our solution to allow larger messages by employing a novel two-server distributed oblivious RAM technique, called $\mathsf{ORAM}^{-}$. Our performance analysis demonstrates that the performance improvement brought by $\mathsf{ORAM}^{-}$ grows with the number and size of messages. Specifically, for $2^{20}$ messages of size 1KB, our scheme needs only $5.577$ s to transmit a message.
Distance-Aware OT with Application to Fuzzy PSI
A two-party fuzzy private set intersection (PSI) protocol between Alice and Bob with input sets $A$ and $B$ allows Alice to learn nothing more than the points of Bob that are ``$\delta$-close'' to her points under some distance metric $\texttt{dist}$. More formally, Alice learns only the set $\{ b\ |~\texttt{dist}{(a,b)} \leq \delta , a \in A,b\in B\}$ for a predefined threshold $\delta$ and distance metric $\texttt{dist}$, while Bob learns nothing about Alice's set. Fuzzy PSI is a valuable privacy tool in scenarios where private set intersection needs to be computed over imprecise or measurement-based data, such as GPS coordinates or healthcare data. Previous approaches to fuzzy PSI rely on asymmetric cryptographic primitives, generic two-party computation (2PC) techniques like garbled circuits, or function secret sharing methods, all of which are computationally intensive and lead to poor concrete efficiency.
This work introduces a new modular framework for fuzzy PSI, primarily built on efficient symmetric key primitives. Our framework reduces the design of efficient fuzzy PSI to a novel variant of oblivious transfer (OT), which we term distance-aware random OT (da-ROT). This variant enables the sender to obtain two random strings $(r_0, r_1)$, while the receiver obtains one of these values $r_b$, depending on whether the receiver’s input keyword $a$ and the sender’s input keyword $b$ are close in some metric space, i.e., $\texttt{dist}(a,b) \leq \delta$. The da-ROT can be viewed as a natural extension of traditional OT, in which the condition (the choice bit) is known to the receiver. We propose efficient constructions for da-ROT based on standard OT techniques tailored for small domains, supporting distance metrics such as the Chebyshev norm, the Euclidean norm, and the Manhattan norm.
By integrating these da-ROT constructions, our fuzzy PSI framework achieves up to a $14\times$ reduction in communication cost and up to a $54\times$ reduction in computation cost compared to previous state-of-the-art protocols, across input set sizes ranging from $2^8$ to $2^{16}$. Additionally, we extend our framework to compute fuzzy PSI cardinality and fuzzy join from traditional PSI-related functionalities. All proposed protocols are secure in the semi-honest model.
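As an ideal-functionality sketch (hypothetical code, not the paper's protocol), da-ROT can be described in a few lines: the sender receives two random strings, and the receiver's output depends only on whether the two keywords are $\delta$-close under the chosen metric.

```python
# Ideal-functionality view of da-ROT (illustration only; which of r0/r1 maps to
# the "close" case is an arbitrary convention here).
import secrets

def chebyshev(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def da_rot(a, b, delta, dist=chebyshev):
    r0, r1 = secrets.token_bytes(16), secrets.token_bytes(16)
    receiver_out = r1 if dist(a, b) <= delta else r0
    return (r0, r1), receiver_out

# Fuzzy-PSI flavour: a match yields a string shared by both parties, which can
# then mask or transfer the matched element.
sender_out, recv_out = da_rot((10, 12), (11, 13), delta=2)
assert recv_out == sender_out[1]
```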
NIZK Amplification via Leakage-Resilient Secure Computation
Suppose that we are given a weak \emph{Non-Interactive Zero-Knowledge} (NIZK) proof system for NP with non-negligible soundness and zero-knowledge errors, denoted by $\alpha$ and $\beta$, respectively. Is it possible to reduce these errors to a negligible level? This problem, known as NIZK amplification, was introduced by Goyal, Jain, and Sahai (Crypto'19) and was further studied by Bitansky and Geier (Crypto'24).
The latter work provides amplification theorems for proofs and arguments, assuming the existence of one-way functions and public-key encryption, respectively. Unfortunately, their results only apply when the security level, $1 - (\alpha + \beta)$, is a constant bounded away from zero. Amplifying NIZK with an inverse polynomial security level remains an open problem and was stated as the main open question in both works.
In this work, we resolve the NIZK amplification problem and show how to amplify any non-trivial NIZK proof system that has a noticeable, inverse-polynomial level of security. As in previous works, we amplify proofs and arguments assuming the existence of one-way functions and public-key encryption, respectively. Furthermore, assuming the existence of collision-resistant hash functions, we preserve, for the first time, properties such as statistical zero-knowledge and proof succinctness.
Our main technical contribution is a new \emph{leakage-resilient secure multiparty} protocol that computes any public-output functionality with information-theoretic security against an adversary that corrupts an arbitrary subset of parties and obtains bounded leakage from each honest party. Our protocol operates in the pairwise correlated randomness model. Previous works relied on stronger setup assumptions in the form of $n$-wise correlations and either supported a smaller corruption threshold or suffered from an exponential dependency on the number of parties. To transform our protocol into a NIZK amplifier, we introduce a new intermediate notion of \emph{leakage-resilient NP secret sharing}, that may be of independent interest.
A Fast Multiplication Algorithm and RLWE-PLWE Equivalence for the Maximal Real Subfield of the $2^r p^s$-th Cyclotomic Field
This paper proves the RLWE-PLWE equivalence for the maximal real subfields of the cyclotomic fields with conductor $n = 2^r p^s$, where $p$ is an odd prime, and $r \geq 0$ and $s \geq 1$ are integers. In particular, we show that the canonical embedding as a linear transform has a condition number bounded above by a polynomial in $n$. In addition, we describe a fast multiplication algorithm in the ring of integers of these real subfields. The multiplication algorithm uses the fast Discrete Cosine Transform (DCT) and has computational complexity $\mathcal{O}(n \log n)$. This work extends the results of Ahola et al., where the same claims are proved for a single prime $p = 3$.
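As generic background: the paper's algorithm uses a DCT adapted to the ring of integers of these real subfields, while the sketch below only illustrates the underlying $\mathcal{O}(n \log n)$ principle of transform-based multiplication, using an FFT over integer polynomials. It is not the paper's algorithm.

```python
# Transform-based polynomial multiplication in O(n log n) (generic background).
import numpy as np

def polymul(a, b):
    n = 1 << (len(a) + len(b) - 2).bit_length()      # FFT length >= deg(a*b)+1
    fa, fb = np.fft.rfft(a, n), np.fft.rfft(b, n)
    c = np.fft.irfft(fa * fb, n)[: len(a) + len(b) - 1]
    return np.rint(c).astype(np.int64)               # exact for small integer inputs

# (1 + 2x + 3x^2)(4 + 5x) = 4 + 13x + 22x^2 + 15x^3
assert list(polymul([1, 2, 3], [4, 5])) == [4, 13, 22, 15]
```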
Fully-Homomorphic Encryption from Lattice Isomorphism
The lattice isomorphism problem (LIP) asks, given two lattices $\Lambda_0$ and $\Lambda_1$, to decide whether there exists an orthogonal linear map from $\Lambda_0$ to $\Lambda_1$. In this work, we show that the hardness of (a circular variant of) LIP implies the existence of a fully-homomorphic encryption scheme for all classical and quantum circuits. Prior to our work, LIP was only known to imply the existence of basic cryptographic primitives, such as public-key encryption or digital signatures.
Improved Private Simultaneous Messages Protocols for Symmetric Functions with Universal Reconstruction
Private Simultaneous Messages (PSM) is a kind of secure multiparty computation with a minimal interaction pattern and minimal security requirement. A PSM protocol is said to have universal reconstruction for a given function family if the algorithm of the referee (the output party) is independent of the function to be computed and the referee cannot infer the function from a protocol execution. In recent work, Eriguchi and Shinagawa (EUROCRYPT 2025) proposed a compiler to obtain a PSM protocol for symmetric functions from PSM protocols with universal reconstruction for symmetric functions with smaller domains. They also constructed the latter PSM protocols with universal reconstruction, by which the former PSM protocol achieves communication complexity better than the previously known protocols. In this paper, we construct the latter PSM protocols with universal reconstruction for symmetric functions more efficiently; the communication complexity is exponentially (in the input range) smaller than that of the protocols by Eriguchi and Shinagawa. As a consequence, we also obtain a PSM protocol (and also an ad-hoc PSM protocol and a robust PSM protocol) for symmetric functions that is more efficient than their protocol. Technically, a main ingredient of their protocols is a linear and injective encoding of histograms of the input elements, and our improvement is realized by finding a more efficient encoding of the histograms.
MOAI: Module-Optimizing Architecture for Non-Interactive Secure Transformer Inference
The advent of Large Language Models (LLMs) has brought about a new wave of productivity, revolutionizing business operations while keeping costs relatively low. The human-like interface of LLMs enables them to be easily integrated with business functions, thereby freeing up precious human resources for more complex, valuable tasks. However, due to the intensive computation and memory requirements of LLM inference, it is preferable and cheaper to deploy LLMs with Cloud Service Providers (CSPs) that offer high-performance computation resources and low-latency networking. Nevertheless, privacy concerns have been raised about the possibility of data leakage to the CSP. In this work, we seek to address such privacy concerns through the use of Fully Homomorphic Encryption (FHE). FHE enables the CSP to work on data in its encrypted form, thus ensuring that the data stay private and secure. We propose an implementation of LLM inference with FHE. While a series of prior works has demonstrated that it is possible to execute LLM inference in a private manner, it remains a challenge to design a solution that is practical.
Our contributions are as follows: We provide the first end-to-end open-source implementation of non-interactive transformer inference with FHE. We report an amortized time of 9.6 minutes for one input with 128 tokens when evaluating the BERT model on CPU. Our packing methods for encrypted matrices remove the need to repack ciphertexts between encrypted matrix multiplication and activation layers. Additionally, we introduce interleaved batching to eliminate the internal rotations during ciphertext matrix multiplications. Our approach also avoids HE rotations in evaluations of the softmax and layerNorm, leading to speedups of 4.22× and 122× over existing works, respectively. Our implementation supports arbitrary token lengths, in contrast with existing solutions that require a full token embedding. Our implementation can be found on GitHub.
Lower Bounds on the Bottleneck Complexity of Secure Multiparty Computation
Secure multiparty computation (MPC) is a cryptographic primitive which enables multiple parties to jointly compute a function without revealing any extra information on their private inputs. Bottleneck complexity is an efficiency measure that captures the load-balancing aspect of MPC protocols, defined as the maximum amount of communication required by any party. In this work, we study the problem of establishing lower bounds on the bottleneck complexity of MPC protocols. While the previously known techniques for lower bounding total communication complexity can also be applied to bottleneck complexity, they do not provide nontrivial bounds in the correlated randomness model, which is commonly assumed by existing protocols achieving low bottleneck complexity, or they are applied only to functions of limited practical interest. We propose several novel techniques for lower bounding the bottleneck complexity of MPC protocols. Our methods derive nontrivial lower bounds even in the correlated randomness model and apply to practically relevant functions including the sum function and threshold functions. Furthermore, our lower bounds demonstrate the optimality of some existing MPC protocols in terms of bottleneck complexity or the amount of correlated randomness.
List Decoding in Private Information Retrieval: Formal Definition and Efficient Constructions
Multi-server Private Information Retrieval (PIR) is a cryptographic primitive that allows a client to retrieve an item of a database shared by multiple servers without revealing its index. This paper addresses the problem of error correction in multi-server PIR, enabling the client to obtain the item even when some servers provide incorrect responses. In a dishonest-majority setting where the majority of servers may introduce errors, it is known that the client can no longer uniquely determine the correct value. Previous approaches in this setting have typically settled for relaxed guarantees under which the client can only reject incorrect values. However, these guarantees are substantially weaker, as they only indicate the presence of errors without providing any information about the desired item. In this paper, we explore a more natural alternative called list-decodable PIR, which ensures that the client receives a small list of candidate values, one of which is correct. We provide the first formal definition of list-decodable PIR and study its basic properties, including a fundamental lower bound on the number of servers and the difficulty of simulation-based security definitions. We propose generic constructions of list-decodable PIR from any semi-honest PIR protocols, each offering different trade-offs. Our constructions improve upon the communication complexity of the only previously known protocol satisfying our definition. Furthermore, they achieve communication complexity comparable to that of the currently best known semi-honest PIR protocols.
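For context, the classic two-server information-theoretic PIR for a bit database runs as follows. This is a semi-honest background sketch, not the paper's list-decodable construction: each server sees a uniformly random subset of indices, so neither learns the queried index.

```python
# Classic 2-server PIR for a bit database (background sketch).
import secrets

def query(n, i):
    S0 = {j for j in range(n) if secrets.randbits(1)}
    S1 = S0 ^ {i}           # symmetric difference flips membership of i only
    return S0, S1

def answer(db, S):
    bit = 0
    for j in S:
        bit ^= db[j]
    return bit

db = [1, 0, 1, 1, 0, 0, 1, 0]
S0, S1 = query(len(db), i=5)
# XOR of the two answers telescopes to the single differing index: db[i].
assert answer(db, S0) ^ answer(db, S1) == db[5]
```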
Dynamic Security: A Realistic Approach to Adaptive Security With Applications to Strong FaF Security
Secure multiparty computation allows multiple parties to jointly compute a function while maintaining security even in the presence of malicious adversaries. There are two types of adversaries in the literature: static adversaries, which choose the parties to corrupt before the protocol begins; and adaptive adversaries, which can corrupt parties during the execution of the protocol based on the messages exchanged by the parties. While adaptive security provides a more robust security guarantee, it may be stronger than necessary in certain scenarios. Indeed, the adversary must allocate some of its resources to corrupt the parties; moreover, certain parties might be more susceptible to corruption, for instance, if they have not updated their operating system to the latest version.
To address this, we introduce a new security notion called \emph{dynamic security}. Here, adversaries may corrupt new parties \emph{during and after} the protocol's execution, but \emph{cannot choose} targets based on the messages. A protocol is said to be $(t,h)$-dynamically secure if it is possible to simulate any adversary that can corrupt up to $t$ parties during the execution and $h$ thereafter. Dynamic security provides meaningful security for many real-world scenarios. Moreover, it circumvents known lower bounds on the communication complexity of adaptive security, allowing for more efficient protocols such as committee-based ones, which would be insecure against adaptive adversaries.
We further explore dynamic security and establish the following results.
1. We show a surprising connection between dynamic security and the seemingly unrelated notion of security with friends and foes (FaF security), introduced by Alon et al. (CRYPTO 2020), which aims to protect honest parties not only from adversaries but also against other honest parties. The notion of $(t,h)$-\emph{strong FaF security} strengthens this by requiring the simulatability of the joint view of any $t$ malicious parties alongside any $h$ honest parties to be indistinguishable from their real-world view. We show that $(t,h)$-dynamic security and $(t,h)$-strong FaF security are equivalent.
2. We consider the feasibility of $(t,h)$-dynamic security and show that every $n$-party functionality can be computed with computational $(t,h)$-dynamic security (with guaranteed output delivery) if and only if $2t+h<n$. By our previous result, this also solves an open problem left by Alon et al. on the feasibility of strong FaF security.
Security of Linear Secret Sharing Schemes with Noisy Side-Channel Leakage
Secret sharing is a foundational cryptographic primitive for sharing secret keys in distributed systems. In a classical threshold setting, it involves a dealer who has a secret, a set of $n$ users to whom shares of the secret are sent, and a threshold $t$ which is the minimum number of shares required to recover the secret. These schemes offer an all-or-nothing security guarantee: fewer than $t$ shares reveal no information about the secret. But these guarantees are threatened by side-channel attacks, which can leak partial information from each share. Initiated by Benhamouda et al. (CRYPTO'18), the security of such schemes has been studied for precise and worst-case bounded leakage models. However, in practice, side-channel attacks are inherently noisy. In this work, we propose a noisy leakage model for secret sharing, where each share is independently leaked to an adversary after being corrupted by additive noise in the underlying field $\mathbb{F}_q$. Under this model, we study the security of linear secret sharing schemes and show bounds on the mutual information (MI) and statistical distance (SD) security metrics. We do this by using the MacWilliams identity from the theory of error-correcting codes. For a given secret, it enables us to bound the statistical deviation of the leaked shares from uniform by $\delta^t$, where $\delta$ is the Fourier bias of the added noise. Existing analyses of the security of linear $(n,t)$-threshold schemes only bound the SD metric, and show resilience for schemes with $t \ge 0.668n$. In this work, we show that these constraints are artifacts of the bounded leakage model. In particular, we show that $(n,t)$-threshold schemes over $\mathbb{F}_q$ with $t \ge \tau (n+1)$ leak $\mathcal{O}(q^{-2t(\gamma+1-1/\tau)})$ bits about the secret, given that the bias of the added noise satisfies $\delta \le q^{-\gamma}$. To the best of our knowledge, this is the first attempt towards understanding the side-channel security of linear secret sharing schemes under the MI metric.
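For reference, the MacWilliams identity relates the weight enumerator of a linear code $C \subseteq \mathbb{F}_q^n$ to that of its dual $C^\perp$. This is the standard statement, independent of the paper's specific parameters:
\[
W_{C^\perp}(x,y) = \frac{1}{|C|}\, W_C\big(x+(q-1)y,\ x-y\big), \qquad
W_C(x,y) = \sum_{c \in C} x^{\,n-\mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)}.
\]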
The Rényi Smoothing Parameter and Its Applications in Lattice-Based Cryptography
The smoothing parameter is a cornerstone concept in lattice-based cryptography. Traditionally defined using the \( L^{\infty} \) distance, this standard formulation can be overly stringent compared to the \( L^1 \) (or statistical) distance more commonly employed in cryptographic contexts. Recent work has proposed relaxed definitions based on Kullback-Leibler (KL) divergence and \( L^1 \) distance, thereby loosening the constraints required for the distance to vanish. However, the additive nature of the \( L^1 \) distance can be limiting for cryptographic applications where probability preservation is essential. In this paper, we introduce the Rényi smoothing parameter of a lattice, based on the Rényi divergence, to address this limitation. The advantages of the Rényi divergence in cryptographic settings are well known thanks to its multiplicative nature. The Rényi smoothing parameter provides a tunable framework that interpolates between the \( L^1 \) and \( L^{\infty} \) distances, offering enhanced flexibility. We present two complementary methods to study the averaging behavior of the Rényi flatness factor: one uses classical tools such as the Minkowski-Hlawka ensemble and Rogers’ formula for computing moments of lattice functions; the other employs Construction A lattices derived from random codes. Finally, we illustrate how this new perspective yields improvements in lattice-based cryptographic constructions.
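As standard background (not the paper's new definition): the order-$\alpha$ Rényi divergence in the multiplicative convention common in lattice-based cryptography, and the probability-preservation property that motivates it, are
\[
R_\alpha(P \| Q) = \Big( \sum_x \frac{P(x)^\alpha}{Q(x)^{\alpha-1}} \Big)^{\frac{1}{\alpha-1}}, \qquad
Q(E) \ \ge\ P(E)^{\frac{\alpha}{\alpha-1}} \big/ R_\alpha(P \| Q) \quad \text{for any event } E,
\]
so a noticeable probability under $P$ remains noticeable under $Q$ whenever $R_\alpha(P\|Q)$ is bounded, in contrast to the additive guarantee offered by the \( L^1 \) distance.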
Tighter Quantum Security for Fiat-Shamir-with-Aborts and Hash-and-Sign-with-Retry Signatures
We revisit the quantum security (in the QROM) of digital signature schemes that follow the Fiat-Shamir-with-aborts (FSwA) or the probabilistic hash-and-sign with retry/abort (HSwA) design paradigm. Important examples of such signature schemes are Dilithium, SeaSign, Falcon+ and UOV. In particular, we are interested in the UF-CMA-to-UF-NMA reduction for such schemes. We observe that previous such reductions have a reduction loss that is larger than what one would hope for, or require a more stringent notion of zero-knowledge than one would hope for.
We resolve this matter here by means of a novel UF-CMA-to-UF-NMA reduction that applies to FSwA and HSwA signature schemes simultaneously, and that offers an improved reduction loss (without making the zero-knowledge assumption more stringent).
AsconAEAD128 Revisited in the Multi-user Setting
More than half a decade after its initiation, NIST declared Ascon the winner of the LWC competition. In the first public draft of AsconAEAD128, NIST recognized that Ascon has limitations when used in multi-user applications. To mitigate this, NIST prescribed the use of a \(256\)-bit key in multi-user applications and produced an instantiation of how to process this extra key size within the current AsconAEAD128 API. While doing so, they identified a limitation of this new scheme (which we refer to as mu-Ascon in this document): mu-Ascon is vulnerable to committing attacks and hence cannot be used in cases where committing security is required. On the other hand, the full key-binding property in Ascon, which separates it from other sponge-type constructions, has been used to show that Ascon is much stronger in the sense that it retains key recovery resistance even in the case where some intermediate state is recovered. We remark that the current mu-Ascon has the limitation that only a partial key is bound during initialization and finalization. In this work, we propose alternative instantiations of the AsconAEAD128 API for multi-user applications. In comparison with the current mu-Ascon proposal, our first construction Ascon-256.v2 guarantees CMT-4 committing security up to 64 bits, and our second construction Ascon-256.v3 achieves both CMT-4 committing security and full 256-bit key binding. Structurally, our instantiations use only one extra permutation call to provide these additional security features compared to mu-Ascon, which has a negligible overhead in terms of performance (given the lightweight nature of the Ascon permutation).
LP2+: a robust symmetric-key AKE protocol with perfect forward secrecy, and an advocacy for thorough security proofs
Symmetric-key authenticated key establishment (AKE) protocols are particularly well suited to resource-constrained environments such as internet of things (IoT) devices. Moreover, they often rely on better-understood assumptions than asymmetric ones. In this paper, we review the security model for symmetric-key AKE protocols. We show why several existing models allow trivial attacks while failing to protect against some non-trivial ones. We fix these issues with our new security definitions.
We show that the protocols $\textrm{LP2}$ and $\textrm{LP3}$ of Boyd et al. do not satisfy the claimed security properties. We propose a new 2-message protocol based on them, called $\textrm{LP2+}$. This protocol is proved to satisfy correctness, weak synchronization robustness, entity authentication, key indistinguishability and, as a consequence, it admits perfect forward secrecy. An instantiation of $\textrm{LP2+}$ is presented, whose security only relies on that of a pseudo-random function (PRF). Its total execution time in normal cases is dominated by only 14 evaluations of the PRF, making it a lightweight protocol that is particularly well suited for resource-constrained environments such as IoT devices.
The flaws found in the security models as well as in the security arguments could have been avoided with precise and detailed proofs. We thus take this paper as an opportunity to advocate for thorough security proofs. Therefore, we have made the choice of rigor over concision.
Simulatability SOA Does Not Imply Indistinguishability SOA in the CCA Setting
Contrary to expectation, we show that simulation-based selective-opening security (SSO) does not imply indistinguishability-based selective-opening security (ISO) in the CCA setting, making them incomparable in the presence of either encryption randomness leakage (sender opening) or secret key leakage (receiver opening). This contrasts with the CPA case, where SSO-CPA is known to be strictly stronger than ISO-CPA in the presence of sender and/or receiver opening. Our separation result holds relative to all message distributions with sufficiently high min-entropy. On the other hand, restricting to message distributions with low enough min-entropy gives rise to an implication.
Our separation result does not rely on the presence of selective openings. At a glance, this may seem to contradict known equivalence results between indistinguishability, semantic security, and selective opening security under trivial openings. We reconcile the apparent contradiction by showing that the selective-opening CCA landscape splits into a “high-entropy” and a “low-entropy” world which must be considered separately.
Algebraic Cryptanalysis of AO Primitives Based on Polynomial Decomposition: Applications to Rain and Full AIM-I/III/V
The LowMC-based post-quantum signature scheme Picnic was selected as a third-round candidate for NIST PQC, attracting wide attention to the design of efficient and secure post-quantum signature schemes using Symmetric Techniques for Advanced Protocols (STAP). Symmetric primitives designed for advanced protocols such as secure multi-party computation (MPC), fully homomorphic encryption (FHE), and zero-knowledge (ZK) proof systems, with the goal of reducing the number of multiplication operations, are referred to as arithmetic-oriented (AO) primitives. These cryptographic primitives are typically constructed over large finite fields, which makes classical statistical analysis methods like differential and linear cryptanalysis inefficient. Due to their inherent algebraic properties, the mainstream security evaluation approaches are based on algebraic attacks. In this paper, we analyze the security of the MPC-friendly primitives \textsc{Rain} (CCS 2022) and AIM (CCS 2023) used in the post-quantum signature schemes Rainier and AIMer. Existing algebraic attacks on \textsc{Rain} and AIM were conducted over $\mathbb{F}_2$. We propose a novel algebraic attack over $\mathbb{F}_{2^n}$ that uses polynomial decomposition to reduce the degrees of equations. By further combining it with the guess-and-determine technique, meet-in-the-middle modeling, and resultants, we are able to attack \textsc{Rain} and the full AIM. Specifically, we successfully attacked 2-round \textsc{Rain} with $2^{73.7}/2^{107.0}/2^{138.9}$ primitive calls and 3-round \textsc{Rain} with $2^{160.6}/2^{236.0}/2^{311.1}$ primitive calls, for the $128/192/256$-bit key. For the full AIM, we successfully attacked it with $2^{114.0}/2^{163.2}/2^{228.3}$ primitive calls for the $128/192/256$-bit key. The attack complexities mainly lie in solving univariate polynomial equations and computing resultants, and hence the complexity evaluations are accurate.
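As generic background on one of the tools: a resultant eliminates a variable from two polynomial constraints, reducing a bivariate system to a univariate equation. The toy equations below are unrelated to Rain's or AIM's actual round functions.

```python
# Variable elimination via resultants (toy example, not the attack equations).
from sympy import symbols, resultant, roots

x, y = symbols('x y')
f = x**2 + y**2 - 5     # two toy constraints linking x and y
g = x*y - 2

r = resultant(f, g, y)  # univariate in x; vanishes at x-coords of common roots
print(r)                # x**4 - 5*x**2 + 4
print(roots(r, x))      # roots +-1, +-2; e.g. (x, y) = (1, 2) solves both
```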
Formal Security and Functional Verification of Cryptographic Protocol Implementations in Rust
We present an effective methodology for the formal verification of practical cryptographic protocol implementations written in Rust. Within a single proof framework, we show how to develop machine-checked proofs of diverse properties like runtime safety, parsing correctness, and cryptographic protocol security. All analysis tasks are driven by the software developer, who writes annotations in the Rust source code and chooses a backend prover for each task, ranging from a generic proof assistant like F$\star$ to dedicated crypto-oriented provers like ProVerif and SSProve. Our main contribution is a demonstration of this methodology on Bert13, a portable, post-quantum implementation of TLS 1.3 written in Rust and verified both for security and functional correctness. To our knowledge, this is the first security verification result for a protocol implementation written in Rust, and the first verified post-quantum TLS 1.3 library.
Collision Attacks on Reduced RIPEMD-128
RIPEMD-128 is an ISO/IEC standard hash function based on a double-branch Merkle-Damgård structure. Its compression function includes two branches with distinct Boolean functions and message expansion permutations. To perform a collision attack, differential characteristics must be constructed simultaneously for both branches under the same message word difference, and the message modification order must align with conditions in both branches. These factors make collision attacks on (reduced) RIPEMD-128 highly challenging.
In 2014, an attack on 40 steps of RIPEMD-128 was achieved by Wang with no state differences in round 3. In this work, we analyze message permutation properties and propose two new structures for creating message differences. These structures enable high-probability local collisions in both branches of round 3, extending the attack to more steps. Notably, the second structure can eliminate all state differences in round 3, allowing the attack to cover more than three whole rounds.
To ensure practical attacks, we limit the number of conditions based on our message modification strategy and use multi-step message modification techniques to control more conditions. As a result, we successfully generate colliding message pairs for 46-step and 54-step reduced RIPEMD-128, with time complexities of approximately $2^{42}$ and $2^{54}$, respectively.
Multi-Party Distributed Point Functions with Polylogarithmic Key Size from Invariants of Matrices
Distributed point functions (DPFs), introduced in 2014, are a widely used primitive in secure computation with a broad range of applications. However, until now, constructions of DPFs with polylogarithmic-size keys have been known only for the two-party setting. We propose a scheme for a polylogarithmic-size DPF for an arbitrary number of parties. We use a technique in which a secret-shared vector that is mapped to collinear vectors by public matrices serves as an invariant for off-path leaves. We show, using a technique by Shamir, that when we work over $\mathbb{Z}_{pq}$, these vectors are hard to compute if factoring is hard.
We also show that our scheme is a secure DPF, provided that two new assumptions hold, one related to the Generic Group Model and the other to MinRank. The output of our scheme lies in the exponent of some group where Diffie-Hellman type problems are hard. Although this limits the usability of our scheme, we believe that it is the first distributed point function for more than two parties with a key size that is polylogarithmic in the size of the domain and that does not use fully homomorphic encryption.
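To fix the functionality being discussed, a DPF shares the point function $f_{\alpha,\beta}(x)$, which equals $\beta$ at $x = \alpha$ and $0$ elsewhere. The sketch below is an illustration only: its keys grow linearly in the domain size, whereas the whole point of the paper is achieving polylogarithmic keys for many parties.

```python
# Naive multi-party "DPF" via additive sharing of the truth table (toy only).
import random

P = 2**31 - 1

def gen(alpha, beta, domain, parties):
    """Share the truth table of f(x) = beta if x == alpha else 0."""
    keys = [[random.randrange(P) for _ in range(domain)] for _ in range(parties - 1)]
    last = []
    for x in range(domain):
        target = beta if x == alpha else 0
        last.append((target - sum(k[x] for k in keys)) % P)
    return keys + [last]

def eval_share(key, x):
    return key[x]

keys = gen(alpha=3, beta=42, domain=8, parties=4)
assert sum(eval_share(k, 3) for k in keys) % P == 42   # on-path leaf
assert sum(eval_share(k, 5) for k in keys) % P == 0    # off-path leaf
```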
A Novel Leakage Model in OpenSSL’s Miller-Rabin Primality Test
At Crypto 2009, Heninger and Shacham presented a branch-and-prune algorithm for reconstructing RSA private keys given a random fraction of their private components. This method is widely adopted in side-channel attacks, and its complexity is closely related to the specific leakage pattern encountered. In this work, we identify a novel leakage model in the Miller-Rabin primality test implemented in OpenSSL. Under certain side-channel attacks against fixed-window modular exponentiation (e.g., recovering the least significant $b$ bits from each window), the proposed model enables staggered recovery of bits in $p$ and $q$, reducing uncertainty in key reconstruction. In particular, this model includes previously undocumented scenarios where full key recovery is achievable without branching. To understand how the proposed leakage model could contribute to attacks on modular exponentiation, we investigated the global and local behavior of key reconstruction. Our evaluation demonstrates that the proposed scenarios enable more efficient key reconstruction and retain this advantage when additional erasure bits are introduced. Moreover, in specific cases, successful reconstruction remains achievable within practical time even if fewer than 50% of the bits are obtained. Finally, we conducted a series of experiments to confirm the practicality of our assumption, successfully recovering the lower 4 bits from each 6-bit window.
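For context on the reconstruction step, a minimal branch-and-prune in the style of Heninger-Shacham extends candidate factors bit by bit and keeps only those consistent with $pq \equiv N \pmod{2^{i}}$. The primes and leakage positions below are toy values chosen for illustration, not the paper's leakage model.

```python
# Minimal branch-and-prune factor reconstruction (toy 8-bit primes).
def reconstruct(N, known_p, known_q, nbits):
    """known_p/known_q map bit positions to leaked bit values."""
    cands = [(1, 1)]                       # p and q are odd, so bit 0 is 1
    for i in range(1, nbits):
        nxt = []
        for p, q in cands:
            for bp in (0, 1):
                if i in known_p and known_p[i] != bp:
                    continue
                for bq in (0, 1):
                    if i in known_q and known_q[i] != bq:
                        continue
                    pp, qq = p | (bp << i), q | (bq << i)
                    # prune: candidate product must match N on the low bits
                    if pp * qq % (1 << (i + 1)) == N % (1 << (i + 1)):
                        nxt.append((pp, qq))
        cands = nxt
    return [(p, q) for p, q in cands if p * q == N]

p, q = 0b11010011, 0b10110101              # 211 and 181, both prime
N = p * q
leak_p = {i: (p >> i) & 1 for i in range(0, 8, 2)}   # every other bit leaked
leak_q = {i: (q >> i) & 1 for i in range(0, 8, 2)}
print(reconstruct(N, leak_p, leak_q, 8))   # contains (211, 181)
```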
The Large Block Cipher Family Vistrutah
Vistrutah is a large block cipher with block sizes of 256 and 512 bits. It iterates a step function that applies two AES rounds to each 128-bit block of the state, followed by a state-wide cell permutation. Like Simpira, Haraka, Pholkos, and ASURA, Vistrutah leverages AES instructions to achieve high performance.
For each component of Vistrutah, we conduct a systematic evaluation of functions that can be efficiently implemented on both Intel and Arm architectures. We therefore expect them to perform efficiently on any recent vector instruction set architecture (ISA) with AES support. Our evaluation methodology combines latency estimation on an abstracted vector ISA with security analysis. The goal is to maximize the ratio of "bits of security per unit of time", i.e., to achieve the highest security for a given performance target, or equivalently, the best performance for a given security level within this class of designs. Implementations confirm the accuracy of our latency model. Vistrutah even performs significantly better than Rijndael-256-256.
We support our security claims with comprehensive ad hoc cryptanalysis. An isomorphism between Vistrutah-512, the 512-bit-wide variant, and the AES allows us to leverage the extensive cryptanalysis of AES and apply it to Vistrutah-512.
A core design principle is the use of an inline key schedule: all round keys are computed during each encryption or decryption operation without requiring memory storage. As a result, rekeying carries no overhead. Key schedules like AES's must precompute and store round keys in memory for acceptable performance; however, in 2010 Kamal and Youssef showed that this makes cold boot attacks more effective. Vistrutah's approach minimizes leakage to at most one value during context switches. Furthermore, expensive key schedules reduce key agility, constraining the design of modes of operation.
Vistrutah is particularly well-suited for birthday-bound modes of operation, including Synthetic IV modes and Accordion modes for 256-bit block ciphers. It can serve as a building block for compression functions (such as Matyas-Meyer-Oseas) in wide Merkle-Damgård hash functions. Additionally, it can implement "ZIP" wide pseudo-random functions as recently proposed by Florez-Gutierrez et al. in 2024.
Finally, we present short (i.e., reduced-round) versions of Vistrutah that are analyzed taking into account the restrictions imposed on attackers by specific modes of operation. In particular, we model the use of the block ciphers in Hash-Encrypt-Hash (HEH) constructions such as HCTR2, as well as in forkciphers. These short versions of Vistrutah can be used to accelerate modes of operation without sacrificing security.
Incompressible Encryption with Everlasting Security
Recently, the concept of incompressible encryption has emerged as a powerful enhancement to key-leakage resilience. In an incompressible encryption scheme, an adversary who intercepts ciphertexts is forced to dedicate a significant amount of memory to store them in full if they wish to extract any information about the plaintext later when the secret key becomes available. Given two messages, the security game involves two adversaries: the first adversary receives an encryption of one of the messages and produces a compressed state. Then, the second adversary, given both the secret key and the compressed state, attempts to determine which message was encrypted.
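In our notation (a sketch of the game just described, with $S$ the bound on the compressed state), the experiment reads:

\[
\begin{aligned}
\mathbf{Exp}^{b}(\mathcal{A}_1,\mathcal{A}_2,S):\;
&(\mathsf{pk},\mathsf{sk}) \leftarrow \mathsf{KGen}(1^\lambda);\quad
(m_0,m_1,\mathsf{st}) \leftarrow \mathcal{A}_1(\mathsf{pk});\quad
\mathsf{ct} \leftarrow \mathsf{Enc}(\mathsf{pk},m_b);\\
&\sigma \leftarrow \mathcal{A}_1(\mathsf{st},\mathsf{ct})
\ \text{with}\ |\sigma| \leq S;\quad
b' \leftarrow \mathcal{A}_2(\mathsf{sk},m_0,m_1,\sigma);\quad
\text{output}\ b'.
\end{aligned}
\]

Incompressibility requires $\big|\Pr[\mathbf{Exp}^{0}=1]-\Pr[\mathbf{Exp}^{1}=1]\big|$ to be negligible for efficient $\mathcal{A}_1,\mathcal{A}_2$; everlasting security, discussed below, additionally allows $\mathcal{A}_2$ to be computationally unbounded.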
Several positive results exist in incompressible cryptography. On the one hand, there are constructions based on minimal assumptions but with a poor rate (i.e., rate tends to 0). On the other hand, there are rate-1 constructions that achieve optimal efficiency but rely on strong cryptographic assumptions, such as obfuscation.
A stronger security notion, known as everlasting security, has been proposed for incompressible encryption. In this formulation, the second adversary, who receives the compressed state and the secret key, is allowed to be computationally unbounded. While this notion is conceptually appealing, no constructions of everlasting incompressible encryption are currently known, regardless of the underlying assumption or even in idealized models.
In this work, we give the first construction of everlasting incompressible encryption. In fact, we show that everlasting incompressible encryption is inherent in any sufficiently secure public-key encryption scheme. Specifically, we prove that any public-key encryption scheme with subexponential security (when instantiated with an appropriate security parameter) already satisfies the definition of everlasting incompressible encryption with subexponential security. Furthermore, our scheme achieves rate-1, improving upon existing results even for the weaker notion of standard incompressible encryption.
OptAttest: Verifying Multi-List Multi-Hop History via a Hybrid Zero-Knowledge Architecture
To prevent privacy-preserving digital assets from becoming instruments of despotism via unitary-executivist compliance regimes, we propose OptAttest, a hybrid zero-knowledge architecture. This system empowers users to optionally generate verifiable attestation history for the current (Hop 0) and immediately preceding (Hop 1) transactions involving their private commitments. For crucial Hop 0 multi-list attestations, users employ Zero-Knowledge Proofs (ZKPs) of claims from selected Verifiable Credentials (VCs). Users achieve per-transaction efficiency with diverse VC types by pre-computing and caching proofs of their VC validity. This approach avoids mandated adherence to singular, fallible external standards. Opted-in lightweight updates create cryptographic accumulator summaries, verified by network infrastructure (e.g., Layer 2 scaling solutions using Zero-Knowledge Virtual Machines), and are paired with user-managed Intermediate Attestation Data Packets (IADPs) containing detailed evidence. For comprehensive verification, users can then generate full recursive proofs from these IADPs for their attestation-enabled funds, leveraging native zkVM recursion. The protocol facilitates optional attestation generation, not enforcement, allowing downstream policy application. Aiming to cultivate a permissionless ethos, we propose a user-centric balance between privacy and verifiable accountability, distinct from models compelling broader data access. Folding schemes are noted as potential future enhancements for recursive proof efficiency.
On Proving Equivalence Class Signatures Secure from Non-interactive Assumptions
Equivalence class signatures (EQS), introduced by Hanser and Slamanig (AC'14, J.Crypto'19), sign vectors of elements from a bilinear group. Their main feature is "adaptivity": given a signature on a vector, anyone can transform it to a (uniformly random) signature on any multiple of the vector. A signature thus authenticates equivalence classes, and unforgeability is defined accordingly. EQS have been used to improve the efficiency of many cryptographic applications, notably (delegatable) anonymous credentials, (round-optimal) blind signatures, group signatures and anonymous tokens. EQS security implies strong anonymity (or blindness) guarantees for these schemes which hold against malicious signers without trust assumptions.
Unforgeability of the original EQS construction is proven directly in the generic group model. While there are constructions from standard assumptions, these either achieve prohibitively weak security notions (PKC'18) or require a common reference string (AC'19, PKC'22), which reintroduces the trust assumptions avoided by EQS.
In this work we ask whether EQS schemes that satisfy the original security model can be proved secure under standard (or even non-interactive) assumptions with standard techniques. Our answer is negative: assuming a reduction that breaks a non-interactive computational assumption after running an adversary against unforgeability once, we construct efficient meta-reductions that either break the assumption or break class-hiding, another security requirement for EQS.
Generalized BGV, BFV, and CKKS for Homomorphic Encryption over Matrix Rings
Some of the most valuable applications of homomorphic encryption, such as encrypted machine learning inference, require efficient large-scale plaintext-ciphertext and ciphertext-ciphertext matrix multiplications. Current state-of-the-art techniques for matrix multiplication all build on the ability to pack many plaintext values into a single ciphertext and compute on them in a Single Instruction, Multiple Data (SIMD) manner. However, to fit matrix multiplication into this computational model, a large number of additional costly operations need to be performed, such as rotations of elements between plaintext slots.
In this work, we propose an orthogonal approach to performing encrypted matrix operations with BGV-like encryption schemes, where the plaintext and ciphertext spaces are generalized to a matrix ring of arbitrary dimension. To deal with the inherent noncommutativity of matrix rings, we present a new superoperator technique to better represent linear and quadratic expressions in the secret key, which allows for the relinearization of ciphertexts after multiplication. The security of the modified encryption schemes is based on Module-LWE with module rank equal to the dimension of the matrices. With this construction, we demonstrate that Ring-LWE, Module-LWE, and LWE are potentially equally efficient for homomorphic encryption, both in terms of useful information density and noise growth, albeit for different matrix dimensions.
Sabot: Efficient and Strongly Anonymous Bootstrapping of Communication Channels
Anonymous communication is vital for enabling individuals to participate in social discourse without fear of marginalization or persecution. An important but often overlooked part of anonymous communication is the bootstrapping of new communication channels, generally assumed to occur out-of-band. However, if the bootstrapping discloses metadata, communication partners are revealed even if the channel itself is fully anonymized. We propose Sabot, the first anonymous bootstrapping protocol that achieves both strong cryptographic privacy guarantees and bandwidth-efficient communication. In Sabot, clients cooperatively generate a private relationship matrix, which encodes who wants to contact whom. Clients communicate with k ≥ 2 servers to obtain “their” part of the matrix and augment the received information using Private Information Retrieval (PIR) to learn about their prospective communication partners. Compared to previous solutions, Sabot achieves stronger privacy guarantees and reduces the bandwidth overhead by an order of magnitude.
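For intuition on the PIR step, here is a minimal 2-server XOR-based PIR sketch (the textbook scheme, not Sabot's actual protocol): the index is hidden from each individual, non-colluding server because each sees only a uniformly random subset.

```python
# Toy 2-server XOR PIR: fetch row i of a database (e.g., a row of a
# relationship matrix) without revealing i to either server.
import secrets

def server_answer(db_rows, query_bits):
    acc = bytes(len(db_rows[0]))          # XOR of all rows selected by query
    for row, bit in zip(db_rows, query_bits):
        if bit:
            acc = bytes(a ^ b for a, b in zip(acc, row))
    return acc

def pir_fetch(db_rows, i):
    n = len(db_rows)
    q1 = [secrets.randbelow(2) for _ in range(n)]   # random subset -> server 1
    q2 = q1[:]
    q2[i] ^= 1                                      # flip index i -> server 2
    a1, a2 = server_answer(db_rows, q1), server_answer(db_rows, q2)
    return bytes(x ^ y for x, y in zip(a1, a2))     # all rows cancel but row i

db = [bytes([j] * 4) for j in range(8)]             # toy 8-row database
assert pir_fetch(db, 5) == db[5]
```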
How to Verify that a Small Device is Quantum, Unconditionally
A proof of quantumness (PoQ) allows a classical verifier to efficiently test whether a quantum machine is performing a computation that is infeasible for any classical machine. In this work, we propose a new approach for constructing PoQ protocols where soundness holds unconditionally assuming a bound on the memory of the prover, but with no restrictions on its runtime. In this model, we propose two protocols:
1. A simple protocol with a quadratic gap between the memory required by the honest parties and the memory bound of the adversary. The soundness of this protocol relies on Raz's (classical) memory lower bound for matrix inversion (Raz, FOCS 2016).
2. A protocol that achieves an exponential gap, building on techniques from the literature on the bounded storage model (Dodis et al., Eurocrypt 2023).
Both protocols are also efficiently verifiable. Despite having worse asymptotics, our first protocol is conceptually simple and relies only on arithmetic modulo 2, which can be implemented with one-qubit Hadamard and CNOT gates, plus a single one-qubit non-Clifford gate.
Decentralized Data Archival: New Definitions and Constructions
We initiate the study of a new abstraction called incremental decentralized data archival (${\sf iDDA}$). Specifically, imagine that there is an ever-growing, massive database such as a blockchain, a comprehensive human knowledge base like Wikipedia, or the Internet Archive. We want to build a decentralized archival of such datasets to ensure long-term robustness and sustainability.
We identify several important properties that an ${\sf iDDA}$ scheme should satisfy. First, to promote heterogeneity and decentralization, we want to encourage even weak nodes with limited space (e.g., users' home computers) to contribute. The minimum space requirement to contribute should be approximately independent of the data size. Second, if a collection of nodes together receive rewards commensurate with contributing a total of $m$ blocks of space, then we want the following reassurances: 1) if $m$ is at least the database size, we should be able to reconstruct the entire dataset; and 2) these nodes should actually be committing roughly $m$ space in aggregate --- even when $m$ is much larger than the data size, the nodes should be storing redundant copies of the database rather than storing just one copy and impersonating arbitrarily many pseudonyms to collect unbounded rewards.
We propose new definitions that mathematically formalize the aforementioned requirements of an ${\sf iDDA}$ scheme.
We also devise an efficient construction in the random oracle model which satisfies the desired security requirements. Our scheme incurs only $\widetilde{O}(1)$ audit cost, as well as $\widetilde{O}(1)$ update cost for both the publisher and each node, where $\widetilde{O}(\cdot)$ hides polylogarithmic factors. Further, the minimum space provisioning required to contribute is as small as polylogarithmic.
Our construction exposes several interesting technical challenges. Specifically, we show that a straightforward application of the standard hierarchical data structure fails, since both our security definition and the underlying cryptographic primitives we employ lack the desired compositional guarantees. We devise novel techniques to overcome these compositional issues, resulting in a construction with provable security while still retaining efficiency. Finally, our new definitions also make a conceptual contribution, and lay the theoretical groundwork for the study of ${\sf iDDA}$. We raise several interesting open problems along this direction.
Learning with Alternating Moduli, Arora-Ge over Composite Moduli, and Weak PRFs
At TCC 2018, Boneh, Ishai, Passelègue, Sahai, and Wu proposed candidate weak and strong PRFs obtained by alternately evaluating linear functions over coprime moduli. Such PRFs can be evaluated by low-depth circuits and are MPC-friendly. However, the security of these PRFs has not been based on well-formed assumptions other than assuming that the PRF constructions themselves are secure.
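For concreteness, here is a toy sketch (ours, with parameters far below cryptographic size) of an alternating-moduli weak-PRF candidate in the spirit of Boneh et al.: a secret linear map over $\mathbb{Z}_2$ followed by an aggregation over $\mathbb{Z}_3$.

```python
# Toy alternating-moduli weak-PRF candidate: y = Kx over Z_2, output sum(y) mod 3.
import secrets

n, m = 16, 16
K = [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]  # secret key

def f(K, x):
    # linear layer over Z_2: y = K x mod 2
    y = [sum(K[i][j] * x[j] for j in range(n)) % 2 for i in range(m)]
    # switch of modulus: aggregate the Z_2 outputs over Z_3
    return sum(y) % 3

x = [secrets.randbelow(2) for _ in range(n)]   # weak PRF: random inputs only
print(f(K, x))
```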
In this paper, we formalize a new assumption called Learning with Alternating Moduli (LAM). We show that over certain large moduli, the LAM assumption is as hard as the Learning with Errors (LWE) assumption. For LAM over constant moduli, we do not know how to base its hardness on the LWE assumption. Instead, we provide
(i) polynomial-time attacks on LAM with constant prime-power moduli and certain constant non-prime-power moduli, and
(ii) evidence of the sub-exponential hardness of LAM with other moduli by analyzing the effect of typical attacks.
More specifically, we put forward two new attacks. The first attack is a recursive algorithm that solves LWE with certain constant composite moduli and error distributions. The algorithm extends the Arora-Ge algorithm for LWE from prime moduli to composite moduli, and it also solves LAM for certain parameters. The second attack is a polynomial-time attack that rules out the existence of weak PRFs in $\mathsf{NC}^0[p]$ for any prime $p$.
Based on our studies, we propose candidate weak PRFs in $\mathsf{NC}^0[p_1,p_2]$ for some distinct primes $p_1,p_2$ based on LAM over constant moduli, or the Learning with Rounding (LWR) assumption over constant moduli. Compared to the weak PRF candidates by Boneh et al., our weak PRF candidates live in the same complexity class while having the advantage of being based on well-formed assumptions.
Registered Functional Encryption for Pseudorandom Functionalities from Lattices: Registered ABE for Unbounded Depth Circuits and Turing Machines, and More
Registered functional encryption (RFE) is a generalization of public-key encryption that enables computation on encrypted data (like classical FE), but without needing a central trusted authority. Concretely, the users choose their own public keys and register their keys together with a function with an (untrusted) key curator. The key curator aggregates all of the individual public keys into a short master public key, which serves as the public key of the FE scheme.
Currently, we only know RFE constructions for restricted functionalities using standard assumptions, or for all circuits using powerful tools such as indistinguishability obfuscation, and only in the non-uniform model. In this work, we make progress on this front by providing the first lattice-based constructions of RFE for pseudorandom functionalities, where the model of computation is either non-uniform (unbounded depth circuits) or uniform (Turing machines). Intuitively, we call a functionality pseudorandom if the output of the circuit is indistinguishable from uniform for every input seen by the adversary. Security relies on LWE and a recently introduced primitive called pseudorandom FE (prFE), which currently can be instantiated from evasive LWE.
We illustrate the versatility of these new functionalities for RFE by leveraging them to achieve key-policy and ciphertext-policy registered attribute-based encryption and registered predicate encryption schemes (KP-RABE, CP-RABE and RPE) for both unbounded depth circuits and Turing machines. Existing RABE constructions support only bounded depth circuits, and prior to our work there neither existed RABE for uniform models of computation nor RPE. As an appealing feature, all our constructions enjoy asymptotic optimality in the sense that their parameters depend neither on the length of public attributes nor the size of policies.
Along the way, we also improve on the state-of-the-art for classical attribute-based encryption (ABE) and predicate encryption (PE). Specifically, we obtain new constructions for KP-ABE, CP-ABE and PE for Turing machines with optimal asymptotic parameters. For KP-ABE, this is an improvement in terms of efficiency, whereas for CP-ABE and PE we are not aware of any prior purely lattice-based construction supporting Turing machines.
Multiparty Homomorphic Secret Sharing and More from LPN and MQ
We give the first constructions of multiparty pseudorandom correlation generators, distributed point functions, and (negligible-error) homomorphic secret sharing for constant-degree polynomials for any number of parties without using LWE or iO. Our constructions are proven secure under the combination of LPN with dimension $n$, $2n$ samples, and noise rate $n^{\varepsilon-1}$ for a small constant $\varepsilon$, and MQ with $n$ variables and $n^{1+\delta}$ equations.
As applications of our results, we obtain from the same assumptions secure multiparty computation protocols with sublinear communication and silent preprocessing, as well as private information retrieval for $M$ servers and size-$\lambda^d$ databases with optimal download rate and client-to-server communication $M^d\cdot \lambda^3$.
Multiparty FHE Redefined: A Framework for Unlimited Participants
Multiparty fully homomorphic encryption (MPFHE) is a generalization of (multi-key) fully homomorphic encryption ((MK)FHE) that lives on the cusp between multiparty computation (MPC) and FHE, enabling computation over data encrypted under multiple keys. Contrary to MKFHE, however, it seeks to reduce the noise inflation caused by the number of parties by letting the parties first compute shared data in MPC before executing the computation in FHE. MPFHE protocols have generally required ad hoc constructions and adaptations of existing protocols. In this work we present a new framework that standardizes the approach to MPFHE, allowing the use of a broad spectrum of MPC and FHE protocols while eliminating the noise inflation caused by the number of participating parties. This yields the first multiparty FHE protocol that supports an arbitrary number of participants. We then show a case study using the FINAL scheme: we reduce the required key material by 40-99.9% compared to FINALLY, the MKFHE variant of FINAL, by 8-71% compared to the AKÖ scheme, and by 65-70% compared to the Park-Rovira scheme. Moreover, we reduce the bootstrapping time for the AKÖ, Park-Rovira, and KMS schemes by 75-99.7%.
TOOP: A transfer of ownership protocol over Bitcoin
We present the Transfer of Ownership Protocol (TOOP). TOOP solves a limitation of all existing BitVM-like protocols (and UTxO blockchains at large) that restricts unlocking transfers to addresses known and preregistered during lock and setup. Accordingly, our protocol avoids the financially costly, regulatorily problematic, and congestion-prone front-and-reimburse paradigm.
Furthermore, we note that one of the main applications of TOOP is as an enabler of secure transfer of assets from one UTxO blockchain to another, and back. We showcase this by sketching a committee-based validation protocol that requires only 1-out-of-$n$ honesty. This protocol operates in distinct phases: the lock phase, where the initial setup is performed and individual assets are locked on Bitcoin, and the unlock phase, where ownership of the asset is transferred to a possibly different legitimate owner.
This cross-chain bridge protocol, where TOOP plays a key role, is being formalized in concurrent work, and has been implemented for the first time in Cardinal, a protocol for wrapping Bitcoin Unspent Transaction Outputs (UTxOs) onto the Cardano blockchain, with Bitcoin Ordinals represented as Cardano Non-Fungible Tokens (NFTs).
Permutation-Based Hashing with Stronger (Second) Preimage Resistance - Application to Hash-Based Signature Schemes
The sponge is a popular hash function construction. It operates with a $b$-bit permutation on a $b$-bit state, which is split into a $c$-bit inner part and an $r$-bit outer part. However, the security bounds of the sponge are most often dominated by the capacity $c$: if the length of the digest is $n$ bits, the construction achieves $\min\{n/2,c/2\}$-bit collision resistance and $\min\{n,c/2\}$-bit second preimage resistance (and a slightly more complex but similar bound for preimage resistance). In certain settings, these bounds are too restrictive. For example, the recently announced Chinese call for a new generation of cryptographic algorithms expects hash functions with 1024-bit digests and 1024-bit preimage and second preimage resistance, rendering the classical sponge design basically unusable, except with an excessively large permutation. We present the SPONGE-DM construction to salvage the sponge in these settings. This construction differs from the sponge by evaluating the permutation during absorption in a Davies-Meyer mode. We also present SPONGE-EDM, which evaluates potentially round-reduced permutations during absorption in Encrypted Davies-Meyer mode, and SPONGE-EDM$^c$, which optimizes the amount of feed-forward data in this construction. We prove that these constructions generically achieve $\min\{n/2,c/2\}$-bit collision resistance as the sponge does, but $n$-bit preimage resistance and $\min\{n,c-\log_2(\alpha)\}$-bit second preimage resistance, where $\alpha$ is the maximum size of the first preimage in blocks. With such constructions, one could improve the security (resp., efficiency) without sacrificing the efficiency (resp., security) of hash-based signature schemes whose security relies solely on the (second) preimage resistance of the underlying hash functions. Also, one could use the $1600$-bit Keccak permutation with capacity $c=1088$ and rate $r=512$ to achieve $512$-bit collision resistance and $1024$-bit preimage and second preimage resistance, without making extra permutation calls. To encourage further cryptanalysis, we propose two concrete families of instances of SPONGE-EDM (expected to be weaker than SPONGE-DM), using SHA3 and Ascon. Moreover, we concretely demonstrate the security and performance advantages of these instances in the context of hashing and hash-based signing.
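To illustrate the difference from the classical sponge, here is a minimal sketch under our reading of the construction (toy permutation and sizes, not the proposed instances): SPONGE-DM wraps each absorbing permutation call in a Davies-Meyer feed-forward.

```python
# Sponge vs. SPONGE-DM absorption, with a toy byte permutation (NOT
# Keccak/Ascon) and toy rate/capacity r = 2, c = 4 bytes.
R, C = 2, 4
B = R + C

def perm(s):                              # placeholder b-byte permutation
    return bytes((167 * x + 29 * i) % 256 for i, x in enumerate(s[::-1]))

def absorb(blocks, davies_meyer):
    s = bytes(B)
    for m in blocks:                      # m is an R-byte message block
        x = bytes(a ^ b for a, b in zip(s, m + bytes(C)))
        s = perm(x)                       # classical sponge: s <- P(x)
        if davies_meyer:                  # SPONGE-DM: s <- P(x) XOR x
            s = bytes(a ^ b for a, b in zip(s, x))
    return s

msg = [b"ab", b"cd"]
print(absorb(msg, False).hex())           # sponge absorption
print(absorb(msg, True).hex())            # SPONGE-DM absorption
```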
An almost key-homomorphic post-quantum block cipher with key rotation and security update for long-term secret storage
In this paper, we propose a new block cipher primitive, based on ring-LWE, which allows key rotation with a possible security update. This makes it possible to double the security of the ciphertext with each key rotation. Our scheme could therefore be used for long-term secret storage, allowing the security of the ciphertext to be adapted to the attacker's computing power, without the need for decryption.
We propose an implementation of our cryptographic scheme and prove its security.
Addendum to How Small Can S-boxes Be?
In ToSC 2025(1), Jia et al. proposed a SAT-aided automatic search tool for S-box design. Part of the functionality of this tool is to search for implementations of an S-box with good area and gate-depth complexity. However, it is well known that gate-depth complexity cannot precisely reflect the latency of an implementation. To overcome this problem, Rasoolzadeh introduced the concept of latency complexity, a metric that captures the real-world latency cost of implementing an S-box more precisely than gate-depth complexity.
In this addendum, we adapt Jia et al.'s tool to prioritize latency as the primary metric and area as the secondary metric when searching for good implementations of existing S-boxes. The results show that the combination of Jia et al.'s tool and Rasoolzadeh's latency complexity can lead to lower-latency S-box implementations. For the S-boxes used in LBlock, Piccolo, SKINNY-64, RECTANGLE, PRESENT and TWINE, which are popular targets in this research line, we find new implementations with lower latency. We conducted synthesis comparisons of area and latency under multiple standard libraries, where our results consistently outperform previous work in terms of latency. For example, for LBlock-S0, our solution reduces latency by around 50.0%-73.8% compared to previous implementations in the TSMC 90nm library with the latency-optimized synthesis option.
A Framework for Advanced Signature Notions
Beyond-unforgeability features formalize additional security properties for signature schemes. We develop a general framework of binding properties for signature schemes that encompasses existing beyond-unforgeability features and reveals new notions. Furthermore, we give new results regarding various transforms: we show that the transform by Cremers et al. (SP'21) achieves all of our security notions, and we provide requirements under which this is also the case for the transform by Pornin and Stern (ACNS'05). Finally, we connect our framework to unforgeability notions.
Zero-Trust Post-quantum Cryptography Implementation Using Category Theory
This paper blends post-quantum cryptography (PQC) and zero-trust architecture (ZTA) to secure access to AI models, formalized through the abstract mathematical lens of category theory. In this work, lattice-based PQC primitives are assigned ZTA components, including microsegmentation and context-aware authentication, leading to a visual compositional framework that describes cryptographic workflows as morphisms and trust policies as functors, showing how category theory allows for fine-grained policies and adaptive trust. This quantum-resistant algorithmic perspective eases protection against adversarial AI threats. The paper uses a concrete implementation to attest to the effectiveness of the theoretical contribution, rendering it a crypto-agile transition using categorical proofs for AI security.
Efficient Pairings Final Exponentiation Using Cyclotomic Cubing for Odd Embedding Degree Curves
In pairing-based cryptographic applications, final exponentiation with a large fixed exponent ensures distinct outputs for the Tate pairing and its derivatives. Despite notable advancements in optimizing elliptic curves with even embedding degrees, improvements for those with odd embedding degrees, particularly those divisible by $3$, remain underexplored. This paper introduces three methods for applying cyclotomic cubing in the final exponentiation to enhance computational efficiency. The first allows for the execution of one cyclotomic cubing based on the structure of the final exponentiation. The second leverages the structure of some existing seeds to enable the use of cyclotomic cubing, and extends this strategy to generate new seeds. The third generates seeds with sparse ternary representations so that cyclotomic cubing can be applied as an alternative to squaring. These optimizations improve performance by up to $19.3\%$ when computing the final exponentiation for the optimal Ate pairing on BLS15 and BLS27, the target elliptic curves of this study.
Laurent Polynomial-Based Linear Transformations for Improved Functional Bootstrapping
Following Gentry's seminal work (STOC 2009), Fully Homomorphic Encryption (FHE) has made significant advancements and can even evaluate functions during the bootstrapping process, a capability called functional bootstrapping. Recently, Liu and Wang (ASIACRYPT 2023) proposed a new approach to functional bootstrapping that bootstraps ciphertexts in 7ms amortized time. Their method packs the secret key of the TFHE cryptosystem into a ciphertext of the BFV cryptosystem and then performs the functional bootstrapping of TFHE within BFV. However, while this yields high amortized efficiency, it suffers from high latency and a computational complexity of $\mathcal{O}(\sqrt{t})$ ciphertext-ciphertext multiplications, due to the use of a large BFV plaintext prime serving as the TFHE ciphertext modulus, $t = 65537$, to maximize the number of SIMD slots.
In this work, we adapt their techniques to achieve lower-latency functional bootstrapping by relaxing the requirement of a prime BFV plaintext modulus to prime powers, $t = p^r$. We first introduce an improved linear transformation stage that multiplies the Laurent-polynomial-packed secret key and ciphertext components, $sk_j$ and $a_{ij}$, to evaluate a $\mathbb{Z}_{p^r}$-linear map. With this, we reduce the number of operations needed to evaluate the linear phase of bootstrapping. We then generalize their functional bootstrapping procedure from the plaintext space $\mathbb{Z}_t$ to $\mathbb{Z}_{p^r}$ by leveraging the digit extraction algorithm, achieving a theoretical complexity of $\mathcal{O}(r^2\sqrt{p})$ ciphertext-ciphertext multiplications. Additionally, we enable a multi-valued bootstrapping scheme that permits the evaluation of multiple functions over a shared input. To the best of our knowledge, this is the first demonstration of such a method for TFHE ciphertexts that relies predominantly on BFV-based techniques.
In our experiments, we achieve overall runtimes as low as 49.873s, representing latency reductions of at least $26\times$, while noting a $19\times$ slowdown in amortized performance.
LEAF: A Low-Latency Evaluation Architecture for Feedforward Block in Privacy-Preserving Transformer Inference
Fully homomorphic encryption (FHE) is an appealing and promising solution for privacy-preserving transformer inference that protects users' privacy. However, the huge computational overhead makes it unrealistic to apply FHE to real-world transformers for large language models (LLMs). Current FHE-based approaches to secure transformer inference face significant performance challenges, with total latency exceeding 5 hours for 32-input batches.
The feedforward block, comprising a large-scale matrix multiplication followed by a GELU evaluation, is widely recognized as one of the most computationally intensive components in privacy-preserving transformer inference. In the state-of-the-art system NEXUS, evaluating the feedforward block incurs a total latency of 5,378 seconds, processing up to 32 inputs per batch.
We aim to reduce the latency and propose LEAF, a low-latency evaluation architecture for the feedforward block. LEAF introduces a novel combination of fast matrix multiplication and an asymptotically efficient algorithm for computing non-polynomial activations. When evaluated on the BERT-base model, LEAF reduces total latency to 53.4 seconds, offering a $100\times$ speedup over the state-of-the-art method in the same environment. Our implementations are available.
Towards Better Integral Distinguishers over $\mathbb{F}_{p}$ Based on Exact Coefficients of Monomials
Symmetric primitives used in multi-party computation, fully homomorphic encryption, and zero-knowledge proofs are often defined over a finite field $\mathbb{F}_{q}$ with $q=2^t$ or an odd prime $p$. The integral attack is one of the most effective methods against such primitives due to their common use of low-degree non-linear layers, which in turn highlights the importance of a deeper understanding of degree growth. For ciphers defined over $\mathbb{F}_{2^t}$, numerous works have explored the growth of the algebraic degree. However, these methods cannot be directly applied to $\mathbb{F}_{p}$. At CRYPTO 2020, Beyne et al. extended integral cryptanalysis to $\mathbb{F}_{p}$ by comparing the degree with $s(p-1)$ when using $p^s$ data. However, given that precise degree evaluation remains fundamentally challenging and often computationally infeasible, one may miss better integral distinguishers.
In this paper, we present the first automatic search model over $\mathbb{F}_{p}$ based on the exact coefficient $\mathcal{A}$ of the monomial $\prod_{w=1}^{s}x_w^{p-1}$ in the algebraic representation. This model is constructed following the Computation-Traceback-Determine framework, where $\mathcal{A}$ is represented by several sums of multinomial coefficients under specific conditions. The existence of integral properties is then transformed into determining whether these sums can consistently equal $0\bmod{p}$. This determination is facilitated by four newly developed propositions based on Lucas' theorem. To demonstrate the effectiveness of our framework, we apply it to all variants of GMiMC. As a result, we achieve the best integral distinguishers for GMiMC-erf/-crf with large primes when they are used as block ciphers. For GMiMC-nyb/-mrf with 32/64-bit primes, our integral distinguishers cover more rounds than all other attacks. Meanwhile, all distinguishers we identified are no worse than the trivial ones predicted by considering only the maximal degree. This shows the necessity of considering exact coefficients when searching for integral distinguishers over $\mathbb{F}_p$. Our framework is further employed to assess the security of two HADES designs: HadesMiMC and Poseidon2$^\pi$. The results reveal that the full rounds at the beginning and end of HADES provide sufficient resistance against integral cryptanalysis.
Poseidon and Neptune: Gröbner Basis Cryptanalysis Exploiting Subspace Trails
At the current state of the art, algebraic attacks are the most efficient method for finding preimages and collisions for arithmetization-oriented hash functions, such as the closely related primitives Poseidon/Poseidon2 and Neptune. In this paper, we revisit Gröbner basis (GB) attacks that exploit subspace trails to linearize some partial rounds, considering both sponge and compression modes.
Starting from Poseidon's original security evaluation, we identified some inaccuracies in the model description that may lead to misestimated round requirements. Consequently, we reevaluate and improve the proposed attack strategy. We find that depending on the concrete instantiation, the original security analysis of Poseidon under- or overestimates the number of rounds needed for security. Moreover, we demonstrate that GB attacks leveraging subspace trails can outperform basic GB attacks for Poseidon/Poseidon2 and Neptune.
We propose a variant of the previous attack strategy that exploits a crucial difference between Poseidon/Poseidon2 and Neptune: while Poseidon's inverse round functions have a high degree, Neptune's inverse external rounds maintain the same degree as the forward rounds. Using this new model, we demonstrate that Neptune's security in compression mode cannot be reduced to its security against the Constrained-Input-Constrained-Output (CICO) problem. To the best of our knowledge, this is the first time a concrete example has been provided where finding preimages is easier than solving the corresponding CICO problem.
Our results emphasize the importance of considering the mode of operation in security analysis while confirming the overall security of Poseidon/Poseidon2 and Neptune against the presented algebraic attacks.
Tight Multi-User Security of CCM and Enhancement by Tag-Based Key Derivation Applied to GCM and CCM
$\textsf{GCM}$ and $\textsf{CCM}$ are block cipher (BC) based authenticated encryption modes. In multi-user (mu) security, the total number of BC invocations by all users $\sigma$ and the maximum number of BC invocations per user $\sigma_\mathsf{u}$ are crucial factors. For $\textsf{GCM}$, the tight mu-security bound has been identified as $\frac{\sigma_\mathsf{u} \sigma}{2^n} + \frac{u p + u^2}{2^k}$, where $k$ and $n$ are the key and block sizes, respectively, $u$ is the number of users, and $p$ is the number of offline queries. In contrast, $\textsf{CCM}$'s mu-security bound is still unclear. Two bounds, $\frac{u \sigma_\mathsf{u}^2}{2^n} + \frac{u p + u^2}{2^k}$ and $\frac{\sigma^2}{2^n} + \frac{u p + u \sigma}{2^k}$, have been derived by Luykx et al. (Asiacrypt 2017) and Zhang et al. (CCS 2024), respectively, but neither is tight, and both are worse than $\textsf{GCM}$'s bound. Moreover, methods to enhance mu security without disruptive changes to the scheme have been considered for $\textsf{GCM}$, namely nonce randomization ($\textsf{NR}$) to improve offline security and nonce-based key derivation ($\textsf{KD}$) to improve online security, but their applicability to $\textsf{CCM}$ has never been discussed. In this paper, we prove an improved mu-security bound for $\textsf{CCM}$, which is tight and reaches $\textsf{GCM}$'s bound. We then prove that $\textsf{NR}$ and $\textsf{KD}$ applied to $\textsf{CCM}$ yield the same bounds as for $\textsf{GCM}$. An important takeaway is that $\textsf{CCM}$ is now proved to be as secure as $\textsf{GCM}$. Moreover, we argue that $\textsf{NR}$ and $\textsf{KD}$ can be insufficient for some applications with massive data, and propose a new enhancement method called nonce-based and tag-based key derivation ($\textsf{NTKD}$), which we apply to $\textsf{GCM}$ and $\textsf{CCM}$. We prove that the resulting schemes meet such real-world needs.
A Provably Secure W-OTS$^+$ based on MQ Problem
In 2022, Antonov showed that SHA-256 does not satisfy a security property that SPHINCS$^+$ needs, and a forgery attack based on this observation reduces the concrete classical security by approximately 40 bits. This illustrates a more general concern: the provable security of some hash-based signature schemes can be compromised when they are implemented with certain real-world hash functions, which motivates the need to design new functions with rigorous, provable security guarantees. Moreover, it has been shown that, in moving from W-OTS to W-OTS$^+$, the security requirement on the hash function can be relaxed from collision resistance to second-preimage resistance (SPR). This means that it is possible to instantiate the underlying function family $\mathcal{F}_n$ in W-OTS$^+$ with functions satisfying SPR and obtain a provably secure W-OTS$^+$.
In this paper, we use multivariate quadratic functions (MQ functions) to instantiate $\mathcal{F}_n$ in W-OTS$^+$, which yields the first provably secure W-OTS$^+$. To prove its security, we need to derive the SPR property of MQ functions; the key is to show the $\mathbf{NP}$-hardness of finding second preimages.
Furthermore, we prove the multi-function, multi-target one-wayness (MM-OW) and the multi-function, multi-target second-preimage resistance (MM-SPR) of MQ functions, which implies the provable security of MQ-based W-OTS$^+$ in the multi-user setting, on the condition that the number of users is $O(n^{1-\epsilon})$ for some $\epsilon\in (0,1)$, where $n$ is the security parameter.
Enhancing Provable Security and Efficiency of Permutation-based DRBGs
We revisit the security analysis of the permutation-based deterministic random bit generator~(DRBG) discussed by Coretti et al. at CRYPTO 2019. Specifically, we prove that their construction, based on the sponge construction, and hence called Sponge-DRBG in this paper, is secure up to $O\left(\min \left\{2^{\frac{c}{2}}, 2^{\frac{\lambda}{2}}\right\}\right)$ queries in the seedless robustness model, where $\lambda$ is the required min-entropy and $c$ is the sponge capacity. This significantly improves the provable security bound from the existing $O\left(\min \left\{2^{\frac{c}{3}}, 2^{\frac{\lambda}{2}}\right\}\right)$ to the birthday bound. We also show that our bound is tight by giving matching attacks.
As the Multi-Extraction game-based reduction proposed by Chung et al. at Asiacrypt 2024 is not applicable to Sponge-DRBG in a straightforward manner, we further refine and generalize the proof technique so that it can be applied to a broader class of DRBGs to improve their provable security.
We also propose a new permutation-based DRBG, dubbed POSDRBG, with almost the optimal output rate $1$, outperforming the output rate $\frac{r}{n}$ of Sponge-DRBG, where $n$ is the output size of the underlying permutation and $r=n-c$. We prove that POSDRBG is tightly secure up to $O\left(\min \left\{2^{\frac{c}{2}}, 2^{\frac{\lambda}{2}}\right\}\right)$ queries. Thus, to the best of our knowledge, POSDRBG is the first permutation-based DRBG that achieves the optimal output rate of 1, while maintaining the same level of provable security as Sponge-DRBG in the seedless robustness model.
Breaking Poseidon Challenges with Graeffe Transforms and Complexity Analysis by FFT Lower Bounds
Poseidon and Poseidon2 are cryptographic hash functions designed for efficient zero-knowledge proof protocols and have been widely adopted in Ethereum applications. To encourage security research, the Ethereum Foundation announced a bounty program in November 2024 for breaking the Poseidon challenges, i.e., solving the CICO (Constrained Input, Constrained Output) problems for round-reduced Poseidon constructions. In this paper, we explain how to apply the Graeffe transform to univariate polynomial solving, enabling efficient interpolation attacks against Poseidon. We provide open-source code and detail our approach for solving several challenges valued at USD 20,000 in total. Compared to existing attacks, we improve wall time and memory usage by factors of $2^{13}$ and $2^{4.5}$, respectively. For all challenges we solved, the cost of memory access turns out to be an essential barrier, which makes the security margin much larger than expected. We prove that the memory-access cost of the FFT grows as the $4/3$-power of the input size, up to a logarithmic factor, which indicates that the commonly used pseudo-linear estimate may be overly conservative. This is very different from multivariate equation solving, whose main bottleneck is linear algebra over finite fields. Thus, it might be preferable to choose parameters such that the best known attack is interpolation, as it presents more inherent hardness.
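For background, one Graeffe step maps a polynomial with roots $r_i$ to one with roots $r_i^2$, via $g(x^2) = (-1)^{\deg f}\, f(x)\, f(-x)$. A minimal sketch over a prime field (ours, not the paper's optimized code):

```python
# One Graeffe step over F_p: the output polynomial has the squares of the
# input's roots. Toy dense arithmetic; real attacks use fast (FFT) multiplication.

def polmul(a, b, p):                      # coefficient lists, low degree first
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return res

def graeffe(f, p):
    f_neg = [c if i % 2 == 0 else (-c) % p for i, c in enumerate(f)]  # f(-x)
    prod = polmul(f, f_neg, p)            # an even polynomial in x
    assert all(c == 0 for c in prod[1::2])
    sign = 1 if (len(f) - 1) % 2 == 0 else p - 1                      # (-1)^deg
    return [(sign * c) % p for c in prod[0::2]]   # g(y), where y = x^2

p = 2**31 - 1
f = [(-2) % p, 0, 1]                      # x^2 - 2, roots +/- sqrt(2)
print(graeffe(f, p))                      # y^2 - 4y + 4 = (y - 2)^2, mod p
```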
Almost-Total Puzzles and Their Applications
Public-coin protocols are cryptographic protocols in which all messages sent by a specific party (typically the receiver or verifier) consist solely of random bits. These protocols have been extensively studied $\textit{in the classical setting}$ due to their advantageous properties in several scenarios, such as the parallel repetition of interactive arguments, and the design of secure multi-party computation with low round complexity, among others. Curiously, $\textit{post-quantum}$ constructions of public-coin protocols remain limited, particularly when optimization is sought in additional factors like round complexity or hardness assumptions.
We introduce the concept of $\textit{almost-total puzzles}$, a novel cryptographic primitive characterized by two key properties: (i) hardness against any efficient adversary, and (ii) an "almost-total" guarantee of the existence of solutions, even when the puzzle generator is malicious. We demonstrate that this primitive can be derived from one-way functions in public-coin, requiring only two rounds. By leveraging this primitive, we obtain a family of new $\textit{public-coin}$ results in both the classical and post-quantum settings, based on the $\textit{minimal assumption}$ of (post-quantum) one-way functions, including:
- five-round post-quantum extractable commitments and witness-indistinguishable arguments of knowledge, where the (knowledge) extractors achieve the $\textit{coherently expected quantum-polynomial-time}$ ($\mathsf{EQPT}_c$) simulation proposed by Lombardi, Ma, and Spooner [FOCS'22];
- five-round classical extractable commitments that $\textit{do not suffer from over-extraction}$;
- five-round classical delayed-input strong witness-indistinguishable arguments of knowledge, and delayed-input witness-hiding arguments of knowledge;
- the five-round post-quantum analogue of the last item, but with the difference that (1) the input can be delayed until the third round, and (2) post-quantum arguments of knowledge are again defined w.r.t. $\mathsf{EQPT}_c$-simulation;
- $O(\log^* \lambda)$-round post-quantum non-malleable commitments.
Resolving the Efficiency-Utility Dilemma of Threshold Linearly Homomorphic Encryption via Message-Space Adapter
Threshold linearly homomorphic encryption (ThLHE) is a useful cryptographic tool for secure computation in multi-party settings, with applications in electronic voting, secure multiparty computation (MPC), and beyond. Although ThLHE offers significant advantages such as low communication overhead, its adoption in modern systems is hindered by a critical dilemma between efficiency and utility. Precisely, existing ThLHE schemes either suffer from high decryption complexity (typically $\mathcal{O}(N^2\log N)$ or worse for $N$ parties) or impose extra restrictions on the message space or participant set, limiting their practicality in large-scale and dynamic settings.
In this work, we resolve this efficiency-utility dilemma for ThLHE by introducing a novel primitive termed message-space adapter (MeSA). By developing a lattice-based MeSA for exponential ElGamal (Exp-ElGamal), we mitigate the small-message restriction of Exp-ElGamal while preserving its efficient threshold decryption. This leads to the design of the first ThLHE scheme that achieves quasi-linear decryption complexity without restrictions on the message space or participant set. We implement a prototype of this new ThLHE scheme and validate the quasi-linear growth of its running time with respect to $N$.
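For intuition, here is a toy sketch of plain Exp-ElGamal (ours, with insecure demo parameters): ciphertexts add homomorphically, but decryption recovers $g^m$ and must then solve a discrete logarithm, which is why the message space must stay small without an adapter such as MeSA.

```python
# Toy exponential ElGamal over the order-q subgroup of Z_p^*, with p = 2q + 1.
import secrets

p, q, g = 2039, 1019, 4                   # g = 2^2 generates the order-q subgroup

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def enc(pk, m):
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (pow(pk, r, p) * pow(g, m, p)) % p

def add(c1, c2):                          # adds the underlying plaintexts
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def dec(sk, c):
    gm = (c[1] * pow(c[0], p - 1 - sk, p)) % p    # g^m = c2 / c1^sk
    m = 0
    while pow(g, m, p) != gm:             # brute-force discrete log: the
        m += 1                            # source of the small-message restriction
    return m

sk, pk = keygen()
assert dec(sk, add(enc(pk, 7), enc(pk, 35))) == 42
```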
Beyond resolving this dilemma, we further extend the applications of our new ThLHE scheme. Specifically, we apply it to construct the first multi-party fully homomorphic encryption scheme with quasi-linear computation complexity and constant communication complexity, while supporting arbitrary threshold and dynamic participant set. This demonstrates the extra benefits of our ThLHE scheme with broader applicability.
Quantum Rewinding for IOP-Based Succinct Arguments
We analyze the post-quantum security of succinct interactive arguments constructed from interactive oracle proofs (IOPs) and vector commitment schemes. Specifically, we prove that an interactive variant of the *BCS transformation* is secure in the standard model against quantum adversaries when the vector commitment scheme is collapse binding.
Prior work established the post-quantum security of Kilian's succinct interactive argument, a special case of the BCS transformation for one-message IOPs (i.e., PCPs). That analysis is inherently limited to one message because the reduction, like all prior quantum rewinding reductions, aims to extract classical information (a PCP string) from the quantum argument adversary. Our reduction overcomes this limitation by instead extracting a *quantum algorithm* that implements an IOP adversary; representing such an adversary classically may in general require exponential complexity.
Along the way we define *collapse position binding*, which we propose as the ``correct'' definition of collapse binding for vector commitment schemes, eliminating shortcomings of prior definitions.
As an application of our results, we obtain post-quantum secure succinct arguments, in the standard model (no oracles), with the *best asymptotic complexity known*.
Logup*: faster, cheaper logup argument for small-table indexed lookups
The logup argument (in its modern GKR version, as described in the eprint:2023/1284 paper) is a logarithmic-derivative-based unindexed lookup argument. An indexed lookup argument can be constructed from an unindexed one using a standard trick.
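For context, the logarithmic-derivative identity underlying logup (stated here in our notation): witness values $f_1,\dots,f_m$ all lie in the table $t_1,\dots,t_N$ if and only if there exist multiplicities $m_j$ such that, as rational functions,

\[
\sum_{i=1}^{m} \frac{1}{X - f_i} \;=\; \sum_{j=1}^{N} \frac{m_j}{X - t_j},
\]

which the argument checks by evaluating both sides at a random point $X = \alpha$.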
In this short informal note, we explain a different way of obtaining an indexed lookup from logup, one that does not commit to any additional arrays of the size of the indexing array. This makes it particularly well suited for lookups in small tables (giving, to our knowledge, the first argument with this property).
Additionally, this argument is not subject to the numerator overflow issue that requires the additional mitigation described in eprint:2024/2067.
Improvements to the SPARK/Lasso protocols are also discussed.
Quantum Security Analysis of the Key-Alternating Ciphers
In this work, we study the quantum security of key-alternating ciphers (KAC), a natural multi-round generalization of the Even–Mansour (EM) cipher underlying many block cipher constructions, including AES. While the classical security of KAC and the quantum security of the $1$-round KAC (i.e., Even-Mansour) cipher are well understood, the quantum resistance of multi-round KAC remains largely unexplored. We focus on the $2$-round KAC construction, defined using public $n$-bit permutations $P_1$, $P_2$ and keys $k_0$, $k_1$, and $k_2$ as $E(x) = P_2(P_1(x \oplus k_0) \oplus k_1) \oplus k_2$ (a toy sketch of this construction follows the list below). Our main contributions are as follows:
1. Quantum Lower Bounds. We provide the first formal analysis showing that a $2$-round KAC is quantum-secure in both the $Q1$ and $Q2$ models. Specifically, in the $Q1$ model, a (non-adaptive) adversary must make at least $2^{2n/5}$ quantum queries to the public permutations and at least $2^{2n/5}$ classical queries to the cipher in order to distinguish it from a random permutation (in contrast to the classical lower bound of $2^{2n/3}$ queries). As a corollary, we show that in the $Q2$ model, a (non-adaptive) adversary requires $2^{n/4}$ quantum queries. To achieve such a result, we employ the quantum hybrid method along with recently proposed lifting theorems in the ideal cipher and random permutation oracle model.
2. Quantum Key-Recovery Attack. We give the first nontrivial quantum key-recovery attack on multi-round KAC in the $Q1$ model where the adversary has quantum access to all of the public permutations. Our quantum attack applies to any $t$-round KAC and achieves quantum query complexity $O(2^{\alpha n})$, where $\alpha = \frac{t(t+1)}{(t+1)^2 + 1}$, improving over the best known classical bound of $O(2^{\alpha' n})$, where $\alpha' = \frac{t}{t+1}$, from Bogdanov et al. (EUROCRYPT 2012). The attack leverages a novel application of quantum walk algorithms specifically adapted to the KAC structure.
3. The $Q1^*$ Model. To bridge the classical and $Q1$ settings, we introduce the $Q1^*$ model, in which the adversary has quantum superposition access to at most one permutation. This model is crucial for our $Q1$ lower bound and supports key-recovery attacks similar to those in $Q1$, using fewer quantum resources. We believe $Q1^*$ is of independent interest.
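For concreteness, here is a toy instantiation (ours) of the $2$-round KAC defined above, with $n = 8$ and random public permutations:

```python
# Toy 2-round KAC: E(x) = P2[P1[x ^ k0] ^ k1] ^ k2 with n = 8.
import random, secrets

n = 8
rng = random.Random(2025)                 # fixed seed: P1, P2 are PUBLIC
P1 = list(range(2 ** n)); rng.shuffle(P1)
P2 = list(range(2 ** n)); rng.shuffle(P2)
k0, k1, k2 = (secrets.randbelow(2 ** n) for _ in range(3))   # secret keys

def E(x):
    return P2[P1[x ^ k0] ^ k1] ^ k2

print([E(x) for x in range(4)])
```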
Succinct Witness Encryption for Batch Languages and Applications
Witness encryption allows one to encrypt a message to an $\mathsf{NP}$ relation $\mathcal{R}$ and a statement $x$. The corresponding decryption key is any valid $\mathsf{NP}$ witness $w$. In a succinct witness encryption scheme, we require that the size of the ciphertext be sublinear in the size of the $\mathsf{NP}$ relation. Currently, all realizations of succinct witness encryption for $\mathsf{NP}$ rely on strong assumptions such as pseudorandom obfuscation, extractable witness encryption, or differing-inputs obfuscation. Notably, none of these notions are known from standard assumptions.
In this work, we consider a relaxation of succinct witness encryption for $\mathsf{NP}$ to the setting of batch $\mathsf{NP}$. In this setting, one encrypts to an $\mathsf{NP}$ relation $\mathcal{R}$ together with $K$ statements $x_1, \ldots, x_K$. In the basic version, one can decrypt if they have a witness $w_1, \ldots, w_K$ for all $K$ statements. The succinctness requirement is that the size of the ciphertext should be sublinear in the number of instances $K$, but is allowed to grow with the size of the $\mathsf{NP}$ relation (i.e., the size of a single instance). More generally, we can also impose a (monotone) policy $P \colon \{0,1\}^K \to \{0,1\}$ over the $K$ instances. In this case, decryption is possible only if there exists $w_1, \ldots, w_K$ such that $P(\mathcal{R}(x_1, w_1), \ldots, \mathcal{R}(x_K, w_K)) = 1$.
In this work, we initiate a systematic study of succinct witness encryption for batch languages. We start with two simple constructions that support CNF and DNF policies based on plain witness encryption in conjunction with a somewhere statistically sound batch argument for $\mathsf{NP}$ or a function-binding hash function. Then, using indistinguishability obfuscation, we show how to support policies that can be computed by read-once bounded-space Turing machines. The latter construction is in fact a unique witness map for monotone-policy batch $\mathsf{NP}$, and as such, also gives a SNARG for monotone-policy batch $\mathsf{NP}$ where the size of the common reference string is sublinear in the number of instances.
Finally, we demonstrate some immediate applications of succinct witness encryption for batch languages. We construct new succinct computational secret sharing schemes for CNFs, DNFs, and weighted threshold policies. We also show how to build distributed monotone-policy encryption, a notion that generalizes recent trustless primitives like distributed broadcast encryption and threshold encryption with silent setup.
On the Adaptive Security of Key-Unique Threshold Signatures
In this work, we investigate the security assumptions required to prove the adaptive security of threshold signatures. Adaptive security is a strong notion of security that allows an adversary to corrupt parties at any point during the execution of the protocol, and is of practical interest due to recent standardization efforts for threshold schemes. Towards this end, we give two different impossibility results.
We begin by formalizing the notion of a key-unique threshold signature scheme, where public keys have a unique correspondence to secret keys and there is an efficient algorithm for checking that public keys are well-formed. Key-uniqueness occurs in many threshold schemes that are compatible with standard, single-party signatures used in practice, such as BLS, ECDSA, and Schnorr signatures.
Our first impossibility result demonstrates that it is impossible to prove the adaptive security of any key-unique threshold signature scheme under any non-interactive computational assumption for a broad class of reductions, in the range $\lfloor t/2 \rfloor < t_c \leq t$, where $n$ is the total number of parties, $t_c$ is the number of corrupted parties, and $t+1$ is the threshold. We begin by ruling out full adaptive security (i.e., $t_c = t$ corruptions) for key-unique threshold signatures under non-interactive computational assumptions, including, but not limited to, the discrete logarithm (DL), computational Diffie-Hellman (CDH), and q-Strong Diffie-Hellman (q-SDH) assumptions. We then generalize this impossibility result to all $t_c$ such that $\lfloor t/2 \rfloor < t_c \leq t$.
Our second impossibility result applies specifically to key-unique threshold Schnorr signatures, currently an active area of research. We demonstrate that, even under the interactive computational assumptions One-More Discrete Logarithm (OMDL) and Algebraic OMDL (AOMDL), it is impossible to prove adaptive security for $\lfloor t/2 \rfloor < t_c \leq t$ in the programmable ROM with rewinding.
Taken together, our results underscore the difficulty of achieving adaptive security for key-unique threshold signatures. However, we believe this work can open a new line of research, by indicating assumptions and properties to aim for when constructing adaptively secure threshold schemes.
On the (in)security of Proofs-of-Space based Longest-Chain Blockchains
The Nakamoto consensus protocol underlying the Bitcoin blockchain uses proof of work as a voting mechanism. Honest miners who contribute hashing power towards securing the chain try to extend the longest chain they are aware of. Despite its simplicity, Nakamoto consensus achieves meaningful security guarantees assuming that at any point in time, a majority of the hashing power is controlled by honest parties. This also holds under ``resource variability'', i.e., if the total hashing power varies greatly over time.
Proofs of space (PoSpace) have been suggested as a more sustainable replacement for proofs of work. Unfortunately, no construction of a ``longest-chain'' blockchain based on PoSpace, that is secure under dynamic availability, is known. In this work, we prove that without additional assumptions no such protocol exists. We exactly quantify this impossibility result by proving a bound on the length of the fork required for double spending as a function of the adversarial capabilities. This bound holds for any chain selection rule, and we also show a chain selection rule (albeit a very strange one) that almost matches this bound.
Concretely, we consider a security game in which the honest parties at any point control $\phi>1$ times more space than the adversary. The adversary can change the honest space by a factor $1\pm \varepsilon$ with every block (dynamic availability), and ``replotting'' the space (which allows answering two challenges using the same space) takes as much time as $\rho$ blocks.
We prove that no matter what chain selection rule is used, in this game the adversary can create a fork of length $\phi^2\cdot \rho / \varepsilon$ that will be picked as the winner by the chain selection rule.
We also provide an upper bound that matches the lower bound up to a factor $\phi$. There exists a chain selection rule (albeit a very strange one) which in the above game requires forks of length at least $\phi\cdot \rho / \varepsilon$.
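To make the gap between the two bounds concrete, here is a worked instance with illustrative numbers (ours, not taken from the paper): with $\phi = 2$, $\rho = 20$ and $\varepsilon = 0.05$,
\[
\phi^2\cdot \rho/\varepsilon = 4\cdot 20/0.05 = 1600
\qquad\text{vs.}\qquad
\phi\cdot \rho/\varepsilon = 2\cdot 20/0.05 = 800,
\]
i.e., every chain selection rule can be fooled by a 1600-block fork, while some (strange) rule resists all forks shorter than 800 blocks; the remaining gap between the bounds is exactly the factor $\phi$.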
Our results show the necessity of additional assumptions to create a secure PoSpace-based longest-chain blockchain. The Chia network, for example, uses a verifiable delay function in addition to PoSpace; our bounds show that some such additional primitive is necessary.
Proof of Exponentiation: Enhanced Prover Efficiency for Algebraic Statements
Recent years have seen the widespread adoption of zkSNARKs constructed over small fields, including but not limited to, the Goldilocks field, small Mersenne prime fields, and tower of binary fields. Their appeal stems primarily from their efficacy in proving computations with small bit widths, which facilitates efficient proving of general computations and offers significant advantages, notably yielding remarkably fast proving efficiency for tasks such as proof of knowledge of hash preimages. Nevertheless, employing these SNARKs to prove algebraic statements (e.g., RSA, ECDSA signature verification) presents efficiency challenges.
To address this problem, we first define a new circuit model: arithmetic circuits with additional \textit{exponentiation gates}. These gates serve as fundamental building blocks for establishing more intricate algebraic relations. Then we present a \textit{Hash-committed Commit-and-Prove (HCP)} framework to construct Non-interactive Zero-knowledge (NIZK) proofs for the satisfiability of these circuits. Specifically, when proving knowledge of group exponentiations in discrete logarithm hard groups and RSA groups, compared to verifying complex group exponentiations within SNARK circuits, our approach requires proving only more lightweight computations within the SNARK, such as zk-friendly hash functions (e.g., Poseidon hash function). The number of these lightweight computations depends solely on the security parameter. This differentiation leads to substantial speedups for the prover relative to direct SNARK methods, while maintaining competitive proof size and verification cost.
Special Genera of Hermitian Lattices and Applications to HAWK
In its decisional form, the module-Lattice Isomorphism Problem (decision module-LIP) was first studied by Ling, Liu and Mendelsohn. The authors gave a polynomial-time algorithm to distinguish spinor genera within the genus of a quadratic binary $\mathcal{O}_F$-lattice, assuming that $\mathcal{O}_F$ is a principal ideal domain. However, this algorithm would not impact cryptographic schemes based on decision module-LIP for lattices such as those employed in HAWK, i.e., for binary $\mathcal{O}_K$-lattices equipped with a Hermitian form (with $K$ a cyclotomic number field). Motivated by HAWK's framework, we investigate a concept that serves as an analogue of the spinor genus for Hermitian lattices, called the special genus. This notion was studied by Shimura, who provided a complete set of invariants for describing special genera. Building on this result, we propose an algorithm to determine whether two Hermitian lattices belong to the same special genus. Specifically for HAWK's lattice and its siblings, our algorithm runs in classical polynomial time. Nevertheless, we provide numerical evidence suggesting that the ability to distinguish special genera does not, in practice, constitute a significant advantage for solving decision module-LIP.
On the security of one certificateless aggregate signature scheme with dynamic revocation in vehicular ad-hoc networks
We show that the certificateless signature scheme [Veh. Commun. 47: 100763 (2024)] is insecure, because an adversary can launch a forgery attack for any message. The signer's certificateless public key is not tightly bound to the system public key. This inherent flaw allows the adversary to construct an efficient signing algorithm that is functionally equivalent to the valid signing algorithm. The findings in this note could be helpful for newcomers who are not familiar with design techniques for certificateless signatures.
PSYLOCKE: Provably Secure Logic Locking with Practical Efficiency
Logic locking is an obfuscation technique designed to protect the intellectual property of hardware designs and has attracted considerable attention for over a decade. However, most logic locking schemes have been developed heuristically, leading the field into a cat-and-mouse game between attackers and defenders. Indeed, several proposed schemes have already been broken. While recent works have introduced provably secure logic locking, they often incur impractical overhead or fail to support the ASIC design paradigm while offering strong theoretical security guarantees.
In this work, we propose PSYLOCKE, a provably secure and practically efficient logic locking scheme that balances formal security guarantees with implementation feasibility. We introduce a new security paradigm that formalizes logic locking under predetermined allowable leakage, such as circuit topology, and we provide refined definitions of resilience against specific attacks. Our analysis bridges general security definitions and attack resilience, quantifying how leakage impacts the success of real-world attacks. PSYLOCKE is provably secure under topology leakage and achieves significant efficiency improvement compared to existing provably secure logic locking schemes. Alongside our theoretical analysis, we demonstrate through quantitative evaluations using widely-used benchmark circuits that PSYLOCKE strikes a favorable balance between practical performance and security. Concretely, PSYLOCKE reduced the Area-Power-Delay overhead by an order of magnitude while achieving the same security level, compared to the existing provably secure logic locking scheme.
Attacking Poseidon via Graeffe-Based Root-Finding over NTT-Friendly Fields
This paper explores the algebraic structure of the Poseidon and Poseidon2 permutations over NTT-friendly finite fields, with a focus on preimage recovery via root-finding techniques. We introduce an algorithm for efficiently identifying single roots of high-degree univariate polynomials that emerge from these constructions, based on the Graeffe transform and the tangent Graeffe method. Our approach is evaluated on reduced-round bounty instances of these permutations at various security levels, as proposed by the Ethereum Foundation, demonstrating practical effectiveness. These results yield new insights into the security of permutation-based cryptographic primitives instantiated over NTT-friendly prime fields.
Justvengers: Batched VOLE ZK Disjunctions in $\mathcal{O}(R{+}B{+}C)$ Communication
Recent progress on zero-knowledge proofs (ZKPs) based on vector oblivious linear evaluation (VOLE) offers a promising paradigm for scaling ZKPs over extremely large statements. In particular, VOLE-based ZK is currently the best choice in terms of end-to-end execution time. However, VOLE-based ZK incurs high communication overhead — it usually scales linearly with the circuit size.
To mitigate this, existing literature considers VOLE-based ZK over structured statements. In this work, we focus on the batched disjunctive statement: $\mathcal{P}$ and $\mathcal{V}$ agree on $B$ fan-in-$2$ circuits $\mathcal{C}_1, \ldots, \mathcal{C}_{B}$ over a field $\mathbb{F}$; each circuit is of size $C$ with $n_{\mathit{in}}$ inputs. $\mathcal{P}$'s goal is to demonstrate knowledge of $R$ witnesses $(\mathit{id}_j \in [B], \boldsymbol{w}_j \in \mathbb{F}^{n_{\mathit{in}}})$, one for each $j \in [R]$, such that $\forall j \in [R]:\ \mathcal{C}_{\mathit{id}_j}(\boldsymbol{w}_j) = 0$, where neither $\boldsymbol{w}_j$ nor $\mathit{id}_j$ is revealed. Batched disjunctive statements are effective, e.g., in emulating CPU execution inside ZK. Note that the naïve solution results in a circuit of size $\mathcal{O}(RBC)$.
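For clarity, the relation being proven (not the protocol itself) can be written as a direct verification loop; this is a toy Rust sketch with placeholder names (`Fp`, `Circuit`, `statement_holds` are ours, not from the paper):

```rust
// Hypothetical illustration of the batched disjunctive relation:
// each of the R witnesses must satisfy the circuit selected by its id.
type Fp = u64; // toy stand-in for a field element

struct Circuit {
    // C_i(w); the witness is accepted iff this returns 0
    eval: fn(&[Fp]) -> Fp,
}

fn statement_holds(circuits: &[Circuit], witnesses: &[(usize, Vec<Fp>)]) -> bool {
    // Naive check costs O(R * C) cleartext work; realized directly as a
    // circuit it blows up to O(R * B * C), which Justvengers avoids.
    witnesses
        .iter()
        .all(|(id, w)| (circuits[*id].eval)(w) == 0)
}
```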
To prove such a statement using VOLE-based ZK, the prior state-of-the-art protocol $\mathsf{Antman}$ (Weng et al., CCS'22) incurred $\mathcal{O}(BC + R)$ communication by additionally relying on AHE, whereas $\mathsf{Batchman}$ (Yang et al., CCS'23) achieved $\mathcal{O}(RC + B)$ communication using only VOLE.
In this work, we combine these two protocols non-trivially and present a novel protocol, $\mathsf{Justvengers}$, targeting the batched disjunctive statement: it incurs only $\mathcal{O}(R + B + C)$ communication and $\mathcal{O}(BC + (B + C)R\log R)$ prover computation, using AHE and VOLE.
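Summarizing the communication costs stated above ($B$ circuits of size $C$, $R$ repetitions):
- naive circuit: $\mathcal{O}(RBC)$
- $\mathsf{Antman}$ (VOLE + AHE): $\mathcal{O}(BC + R)$
- $\mathsf{Batchman}$ (VOLE only): $\mathcal{O}(RC + B)$
- $\mathsf{Justvengers}$ (VOLE + AHE): $\mathcal{O}(R + B + C)$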
Side-channel safe conditional moves and swaps
Constant-time implementations are a cornerstone of secure cryptographic systems, particularly in the context of key exchange protocols and digital signature schemes. These implementations are designed to eliminate timing side-channel vulnerabilities by ensuring that the program’s execution time is independent of secret data. A fundamental building block for achieving constant-time behavior is the conditional move operation. Unlike traditional branching constructs (such as if statements), which may introduce data-dependent timing variations, conditional moves allow developers to write logic that behaves identically at the hardware level regardless of input values. As a result, they are widely used in cryptographic libraries and standards to ensure both functional correctness and resistance to timing attacks. In this work, we describe our efforts to implement elliptic curve cryptography with some immunity against certain power leakage side-channel attacks, using standard C and Rust code.
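For intuition, here is a minimal Rust sketch of the standard mask-based idiom such libraries use; this is the textbook technique, not the paper's code, and production implementations additionally guard against the compiler re-introducing branches (e.g., via optimization barriers):

```rust
/// Branchless conditional swap: swaps `a` and `b` iff `choice == 1`.
/// `choice` must be exactly 0 or 1.
fn ct_swap(choice: u64, a: &mut u64, b: &mut u64) {
    // choice = 1 -> mask = 0xFFFF..FF; choice = 0 -> mask = 0.
    let mask = choice.wrapping_neg();
    // t is (a XOR b) when swapping, 0 otherwise; no secret-dependent branch.
    let t = mask & (*a ^ *b);
    *a ^= t;
    *b ^= t;
}

/// Branchless conditional move: returns `a` if `choice == 1`, else `b`.
fn ct_select(choice: u64, a: u64, b: u64) -> u64 {
    let mask = choice.wrapping_neg();
    (mask & a) | (!mask & b)
}
```

The key property is that the same instruction sequence executes for either value of `choice`, so execution time carries no information about the secret condition.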
Diving Deep Into UC: Uncovering and Resolving Issues in Universal Composability
Introduced by Canetti in 2001, Universal Composability (UC) is a widely adopted security model that enables the specification and proof of security for a broad range of protocols, offering strong security guarantees.
At its core lies the universal composition theorem (UC theorem), which ensures that protocols proven secure within the framework remain secure even when multiple instances of them are deployed in real-world environments.
In this work, we present two key contributions. First, we identify several problems with the UC framework, in particular with the UC theorem. They include counterexamples, limitations that make it unusable for important classes of protocols, and weaknesses in its proof. These problems reveal flaws in nearly all the fundamental concepts of UC.
Second, we update the main concepts of UC to address these problems. Although these revisions are nontrivial, our updated definitions are intended to stay as closely aligned with the original model as possible, while providing greater simplicity overall. To ensure the validity of these updates, we present a proof of the updated UC theorem, which is more detailed and modular than the original.
Fast elliptic curve scalar multiplications in SN(T)ARK circuits
Proof systems for arbitrary computations have found many applications in recent years. However, the proving algorithm has complexity proportional to the size of the computation being proved, so proving large computations is quite inefficient. One such large computation is scalar multiplication over an elliptic curve. In this work, we provide new techniques for reducing the time needed to prove a scalar multiplication, using integer lattice reduction or a (half) extended Euclidean algorithm in a ring of integers. We investigate optimizations for curves with small (complex multiplication) discriminant, and the generalization to multi-scalar multiplications as used in signature verification. We provide an optimized Golang implementation for different elliptic curves in different proof-system settings. The speed-up in proving time is between 22% and 53% compared to the previous state of the art.
Integral cryptanalysis in characteristic $p$
Integral and ultrametric integral cryptanalysis are generalized to finite rings of prime characteristic $p$ that are isomorphic to a product of fields. This extends, for instance, the complete state of the art in integral cryptanalysis from $\mathbf{F}_2^n$ to $\mathbf{F}_q^n$, for all prime powers $q$. A compact representation of transition matrices, based on convex polyhedra, is introduced to ensure that the proposed methods are computationally efficient even for large $p$.
Automated tools are developed and applied to a few generic and several concrete primitives. The analysis shows that previous degree estimates for Feistel-GMiMC, HadesMiMC, AES-Prime, small-pSquare and mid-pSquare are overly optimistic. Furthermore, except for AES-Prime, these primitives do not meet their design criteria unless their number of rounds is substantially increased.
Multivalued Broadcast with Optimal Length
A multi-valued broadcast protocol allows a sender $P_s$ to broadcast an $\ell$-bit message $m$ to $n$ recipients.
For all relevant models, multi-valued broadcast protocols with asymptotically optimal communication complexity $\mathcal{O}(\ell n)+\mathrm{Poly}(n)$ have been published.
Despite their very low communication complexity, these protocols perform poorly in modern networks.
Even if the network allows all $n$ parties to send messages at the same time, the execution time of the protocols is proportional to $\ell n$ (instead of $\ell$).
Even if the network allows to use all bilateral channels at the same time, the execution time is still proportional to $\ell$ (instead of $\ell/n$).
We ask the natural question whether multi-valued broadcast protocols exist which take time proportional to $\ell$ if parties can simultaneously send messages, and even take time proportional to $\ell/n$ if the bilateral channels can be used simultaneously. We provide a complete characterization of multi-valued broadcast with a two-fold answer:
On the negative side, we prove that for $t<n$, multi-valued broadcast requires time proportional to $\ell n$ if parties can simultaneously send messages, and time proportional to $\ell$ if bilateral channels can be used simultaneously.
On the positive side, we prove that for $t < (1-\epsilon)n$ (for any fixed $\epsilon$), multi-valued broadcast is possible in time proportional to $\ell$ when parties can send messages simultaneously, and in time proportional to $\ell/n$ when bilateral channels can be used simultaneously. We provide such protocols both with cryptographic security and with statistical security.
SEEC: Memory Safety Meets Efficiency in Secure Two-Party Computation
Secure Multi-Party Computation (MPC) allows multiple parties to perform privacy-preserving computation on their secret data. MPC protocols based on secret sharing have high throughput which makes them well-suited for batch processing, where multiple instances are evaluated in parallel.
So far, practical implementations of secret sharing-based MPC protocols mainly focus on runtime and communication efficiency, so the memory overhead of protocol implementations is often overlooked. Established techniques to reduce the memory overhead for constant-round garbled circuit protocols cannot be directly applied to secret sharing-based protocols because they would increase the round complexity. Additionally, state-of-the-art implementations of secret sharing-based MPC protocols are implemented in C/C++ and may exhibit memory unsafety and memory leaks, which could lead to undefined behavior.
In this paper, we present SEEC: SEEC Executes Enormous Circuits, a framework for secret sharing-based MPC with a novel approach that addresses memory efficiency and safety without compromising on runtime and communication efficiency. We realize SEEC in Rust, a language known for memory safety at close-to-native speed. To reduce the memory footprint, we develop an in-memory representation for sub-circuits. Thus, we never inline sub-circuit calls during circuit evaluation, a common practice that blows up memory usage in MPC implementations.
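To illustrate the idea, here is a minimal sketch of a non-inlined sub-circuit representation; this is our own illustration under stated assumptions, not SEEC's actual types:

```rust
// Hypothetical in-memory circuit representation with non-inlined sub-circuits.
enum Gate {
    Input(usize),       // wire fed from a party's input
    Xor(usize, usize),  // operands are indices of earlier gates
    And(usize, usize),
    // A call references a shared sub-circuit by id instead of copying its
    // gate list: N calls cost N small records, not N full copies.
    Call { sub_circuit: usize, inputs: Vec<usize> },
}

struct Circuit {
    gates: Vec<Gate>,
}

struct Program {
    sub_circuits: Vec<Circuit>, // each sub-circuit is stored exactly once
    main: Circuit,
}
```

The evaluator then recurses into `sub_circuits[id]` on each `Call`, so memory stays proportional to the description size rather than the fully unrolled circuit.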
We compare SEEC with the state-of-the-art secret sharing-based MPC frameworks ABY (NDSS'15), MP-SPDZ (CCS'20), and MOTION (TOPS'22) w.r.t. runtime, memory, and communication efficiency. Our results show that our reliable and memory-safe implementation has competitive or even better performance.
The DROP Protocol: Dispute Resolution via Observation in Public for Verifiable, In-Person Voting
Dispute resolution has been a significant challenge in verifiable election protocols since such protocols were first proposed more than forty years ago. This work explores the problem from a new perspective and offers strong dispute resolution for in-person voting by depending on observers.
It proposes a simple definition of dispute resolution as a property of a voting protocol---a definition that is independent of any other security goal. It also presents the DROP protocol, a verifiable, in-person voting protocol that runs in the presence of observers who will always reach a correct conclusion in the case of a dispute without ever being able to compromise privacy or facilitate coercion.
HAWK: Having Automorphisms Weakens Key
The search rank-2 module Lattice Isomorphism Problem (smLIP), over a cyclotomic ring of degree a power of two, can be reduced to an instance of the Lattice Isomorphism Problem (LIP) of at most half the rank if an adversary knows a nontrivial automorphism of the underlying integer lattice. Knowledge of such a nontrivial automorphism speeds up the key recovery attack on HAWK at least quadratically, which would halve the number of security bits.
Luo et al. (ASIACRYPT 2024) recently found an automorphism that breaks omSVP, the initial underlying hardness assumption of HAWK. The team of HAWK amended the definition of omSVP to include this so-called symplectic automorphism in their submission to the second round of NIST's standardization of additional signatures. This work provides confidence in the soundness of this updated definition, assuming smLIP is hard, since there are plausibly no more trivial automorphisms that allow winning the omSVP game easily.
Although this work does not affect the security of HAWK, it opens up a new attack avenue involving the automorphism group that may be theoretically interesting on its own.
Enhancing Meme Token Market Transparency: A Multi-Dimensional Entity-Linked Address Analysis for Liquidity Risk Evaluation
Meme tokens represent a distinctive asset class within the cryptocurrency ecosystem, characterized by high community engagement, significant market volatility, and heightened vulnerability to market manipulation. This paper introduces an innovative approach to assessing liquidity risk in meme token markets using entity-linked address identification techniques. We propose a multi-dimensional method integrating fund flow analysis, behavioral similarity, and anomalous transaction detection to identify related addresses. We develop a comprehensive set of liquidity risk indicators tailored for meme tokens, covering token distribution, trading activity, and liquidity metrics. Empirical analysis of tokens like BabyBonk, NMT, and BonkFork validates our approach, revealing significant disparities between apparent and actual liquidity in meme token markets. The findings of this study provide significant empirical evidence for market participants and regulatory authorities, laying a theoretical foundation for building a more transparent and robust meme token ecosystem.
Polocolo: A ZK-Friendly Hash Function Based on S-boxes Using Power Residues (Full Version)
Conventional hash functions are often inefficient in zero-knowledge proof settings, leading to the design of several ZK-friendly hash functions. On the other hand, lookup arguments have recently been incorporated into zero-knowledge protocols, allowing for more efficient handling of ``ZK-unfriendly'' operations, and hence giving rise to ZK-friendly hash functions based on lookup tables.
In this paper, we propose a new ZK-friendly hash function, dubbed $\mathsf{Polocolo}$, that employs an S-box constructed using power residues. Our approach reduces the number of gates required for table lookups, in particular when combined with Plonk, allowing one to use such nonlinear layers over multiple rounds. We also propose a new MDS matrix for the linear layer of $\mathsf{Polocolo}$. In this way, $\mathsf{Polocolo}$ requires fewer Plonk gates than state-of-the-art ZK-friendly hash functions. For example, when $t = 8$, $\mathsf{Polocolo}$ requires $21\%$ fewer Plonk gates than Anemoi, currently the most efficient ZK-friendly hash function, where $t$ denotes the size of the underlying permutation in blocks of $\mathbb F_p$. For $t = 3$, $\mathsf{Polocolo}$ requires $24\%$ fewer Plonk gates than Reinforced Concrete, one of the recent lookup-based ZK-friendly hash functions.
SCMAC and LOL2.0: An AEAD Design Framework and A New Version of LOL Stream Cipher Design Framework
In this paper, we introduce SCMAC, a general framework that transforms large-memory stream ciphers into AEAD schemes. It represents an intermediate design paradigm between Encrypt-then-MAC and dedicated single-pass AEAD, partially integrating encryption and authentication mechanisms while mitigating the risk of state leakage associated with immediate absorption and squeezing. Consequently, this approach harmonizes high performance with enhanced security. Additionally, we propose LOL2.0, an enhanced version of the blockwise stream cipher design framework LOL. This new framework improves security through modifications to the FSM update and output functions, and increases flexibility in constructing LFSR components. Based on SCMAC and LOL2.0, we present two AEAD ciphers, LOL2.0-Mini and LOL2.0-Double, which support both stream cipher and AEAD modes. These ciphers are tailored to Beyond 5G/6G environments, offering 256-bit key length and resistance to known cryptanalysis methods, including differential, linear, and integral attacks. They also provide 128-bit security against forgery attacks in the nonce-respecting setting. Due to their compatibility with AES-NI and SIMD instructions, LOL2.0-Mini and LOL2.0-Double achieve software performance of 90 Gbps and 144 Gbps in stream cipher mode, respectively. In AEAD mode, they perform at 59 Gbps and 110 Gbps, significantly faster than their predecessors' Encrypt-then-MAC versions.
Card-Based Protocol Counting Connected Components of Graphs
Card-based cryptography is a research area for realizing cryptographic functionality, such as secure multiparty computation and zero-knowledge proofs, by using a deck of physical cards and/or other non-electrical tools. Motivated by zero-knowledge proofs of solutions to pencil puzzles, a recent line of work studies card-based protocols that verify connectivity of a set of cells or edges on lattice-shaped boards. In this paper, we generalize the problem to counting the connected components of subsets of any graph, and propose a card-based protocol for this problem.
SPECK: Signatures from Permutation Equivalence of Codes and Kernels
The ongoing search for secure post-quantum cryptographic primitives has led to the development of numerous novel digital signature schemes. In this paper we introduce $\mathsf{SPECK}$, a new signature protocol based on the similarities between the Permuted Kernel Problem ($\mathsf{PKP}$) and the Permutation Code Equivalence Problem ($\mathsf{PEP}$). At its core, $\mathsf{SPECK}$ is built on the permutation version of LESS, but introduces a key modification to the commitment step. Indeed, instead of committing to an entire permuted code, the prover commits to a random relaxed $\mathsf{PKP}$ instance (which we call $\mathsf{PECK}$, Permutation Equivalence of Codes and Kernel) by randomly choosing a codeword from a random permutation of the initial code. In this sense, the secret key is used as a trapdoor to solve the committed $\mathsf{PECK}$ instance. The new approach allows for faster verification that does not involve Gaussian elimination, while maintaining roughly the same signature size as LESS. We present the $\mathsf{SPECK}$ protocol in detail and provide a deep analysis of the security of the newly introduced assumptions.
$\mathsf{HyperWolf}$: Efficient Polynomial Commitment Schemes from Lattices
This work introduces $\mathsf{HyperWolf}$, a Hypercube-Wise Optimized polynomial commitment scheme based on Lattices over ring Fields.
The scheme achieves succinctness with $O(\log N)$ proof size and verifier time, along with linear prover cost. It supports both univariate and multilinear polynomials under a unified framework.
Inspired by the two-dimensional tensor structure employed in \cite{golovnev2021brakedown} to achieve sublinear efficiency, we generalize the idea to a $k$-dimensional tensor (hypercube) structure and design a $k$-round recursive proof protocol, where each round performs a dimensionality reduction, which results in an overall efficiency of $O(kN^{1/k})$. By setting $k = \log N$, our scheme achieves near-optimal asymptotic performance.
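The final parameter choice is a one-line calculation: setting $k = \log_2 N$ gives
\[
N^{1/k} = 2^{(\log_2 N)/k} = 2,
\qquad\text{so}\qquad
O\big(kN^{1/k}\big) = O(2\log_2 N) = O(\log N).
\]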
$\mathsf{HyperWolf}$ is fully transparent and relies only on the standard lattice assumption over rings. In terms of concrete efficiency, for polynomials with $N = 2^{25}$ coefficients, our scheme yields proof sizes that are $8\times$ smaller than the lattice-based scheme of \cite{cini2024polynomial} (Crypto24), and over $200\times$ smaller than $\mathsf{SLAP}$ \cite{albrecht2024slap} (Eurocrypt24). Compared to $\mathsf{Greyhound}$ \cite{nguyen2024greyhound} (Crypto24), our proof size is of similar magnitude, while achieving significantly lower verifier time.
Zero-knowledge Authenticator for Blockchain: Policy-private and Obliviously Updateable
Transaction details and participant identities on the blockchain are often publicly exposed. In this work, we posit that blockchain's transparency should not come at the cost of privacy. To that end, we introduce zero-knowledge authenticators (zkAt), a new cryptographic primitive for privacy-preserving authentication on public blockchains. zkAt utilizes zero-knowledge proofs to enable users to authenticate transactions, while keeping the underlying authentication policies private.
Prior solutions for such policy-private authentication required the use of threshold signatures, which can only hide the threshold access structure itself. In comparison, zkAt provides privacy for arbitrarily complex authentication policies, and offers a richer interface even within the threshold access structure by, for instance, allowing for the combination of signatures under distinct signature schemes.
In order to construct zkAt, we design a compiler that transforms the popular Groth16 non-interactive zero knowledge (NIZK) proof system into a NIZK with equivocable verification keys, a property that we define in this work. Then, for any zkAt constructed using proof systems with this new property, we show that all public information must be independent of the policy, thereby achieving policy-privacy.
Next, we give an extension of zkAt, called zkAt+, wherein, assuming a trusted authority, policies can be updated obliviously, in the sense that a third party learns no new information when a policy is updated by the policy issuer. We also give a theoretical construction for zkAt+ using recursive NIZKs, and explore the integration of zkAt into modern blockchains. Finally, to evaluate their feasibility, we implement both our schemes for a specific threshold access structure. Our findings show that zkAt achieves comparable performance to traditional threshold signatures, while also attaining privacy for significantly more complex policies with very little overhead.
SQIsign2D$^2$: New SQIsign2D Variant by Leveraging Power Smooth Isogenies in Dimension One
In this paper, we propose SQIsign2D$^2$, a novel digital signature scheme within the SQIsign2D family. Unlike other SQIsign2D variants, SQIsign2D$^2$ employs the prime $p=CD-1$ as the field characteristic, where $D=2^{e_2}$, $C=3^{e_3}$ and $C\approx D\approx \sqrt{p}$. By leveraging accessible $C$-isogenies, SQIsign2D$^2$ significantly reduces the degree requirements for two-dimensional isogeny computations, thereby lowering the overall computational overhead compared to other SQIsign2D variants.
We also provide a proof-of-concept implementation of SQIsign2D$^2$ and give an efficiency comparison between SQIsign2D$^2$ and other SQIsign2D variants. In particular, the experimental results demonstrate that the key generation and signing phases of SQIsign2D$^2$ are each more than twice as fast as those of SQIsign2D-East at the NIST-I security level. Additionally, the verification performance of SQIsign2D$^2$ is marginally more efficient.
Rep3 Reloaded: On the Cost of Function-Dependent Preprocessing in Semi-Honest 3PC with Honest Majority
Rep3 denotes the implementation of semi-honest three-party computation with an honest majority in MP-SPDZ (CCS'20). It uses replicated secret sharing with one message per multiplication and party as proposed by Araki et al. (CCS'16). This approach is rivaled by Astra (CCSW'19) and Trio (PETS'25), which use function-dependent preprocessing. The latter is more involved than, e.g., Beaver triples which can be used as a commodity.
In this work, we present a full implementation of Astra and Trio in MP-SPDZ, and we evaluate the costs of the different approaches. We show the equivalence of the schemes, which implies that a protocol in any of the schemes can be translated to one in another with the same overall communication cost. We also present an improvement to two important building blocks for privacy-preserving computation, namely secure comparison and probabilistic truncation used in fixed-point arithmetic. To evaluate our implementation, we have benchmarked machine learning training and inference in all three schemes, improving on Keller and Sun (ICML'22) by over 30%. Our implementation also highlights the large storage requirements of function-dependent preprocessing as it runs the two phases separately. To the best of our knowledge, this is the first implementation to do so.
The Accidental Computer: Polynomial Commitments from Data Availability
In this paper, we show two simple variations of a data availability scheme which enable it to act as a multilinear polynomial commitment scheme over the data in a block. The first variation enables commitments over all of the block's data with zero prover overhead: the data availability construction simply serves both purposes. The second variation allows commitments over subsets of data with nonzero but still concretely small proving costs, since most work is already done during data encoding. This works especially well for blockchains with a high degree of data parallelism, as data-parallel computation is particularly amenable to efficient GKR proofs. Since, in GKR, opening the polynomial commitment contributes significantly to prover costs, our construction enables the prover to reuse work already done by the data availability scheme, reducing—or wholly removing—work associated with the polynomial commitment scheme.
Jagged Polynomial Commitments (or: How to Stack Multilinears)
Modern SNARK constructions, almost ubiquitously, rely on a polynomial commitment scheme (PCS) --- a method by which a prover can commit to a large polynomial $P$ and later provide evaluation proofs of the form ``$P(x)=y$'' to the verifier.
In the context of zkVMs (i.e., proof systems for general-purpose RAM computations), the common design is to represent the computation trace as a sequence of tables, one per CPU instruction, and to commit to these tables, or even their individual columns, as separate polynomials. Committing separately to these polynomials incurs a large overhead in verification costs, especially in hash-based systems.
In this work we drastically reduce this cost via a new construction which we call the jagged PCS. This PCS enables the prover to commit to the entire computation trace as a single polynomial, but then allows the verifier to emulate access to the individual table or column polynomials, so that the arithmetization can proceed in the usual manner. The jagged PCS may be thought of as a sparse PCS for a very particular form of sparsity, namely a ``jagged'' matrix in which each column has a different height.
Our construction of the jagged PCS is highly performant in practice. In contrast to existing sparse PCS constructions for general sparse polynomials, the jagged PCS does not require the prover to commit to any additional oracles, and the prover cost is dominated by 5 finite field multiplications per element in the trace area. Furthermore, we implement the verifier as an arithmetic circuit that depends only on the total trace area, thereby significantly reducing the problem of ``combinatorial explosion'' often encountered in zkVM recursion.
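A minimal Rust sketch of the indexing behind such a ``jagged'' layout (our own illustration of the bookkeeping, not the paper's construction): columns of different heights are concatenated into one flat vector that is committed as a single polynomial, and access to entry (column, row) reduces to an offset lookup:

```rust
// Hypothetical jagged layout: column i has its own height; all columns are
// stacked into one flat vector (the "trace area") committed as one polynomial.
struct Jagged {
    offsets: Vec<usize>, // offsets[i] = sum of heights of columns 0..i
    data: Vec<u64>,      // concatenated column contents
}

impl Jagged {
    fn new(columns: &[Vec<u64>]) -> Self {
        let mut offsets = Vec::with_capacity(columns.len());
        let mut data = Vec::new();
        for col in columns {
            offsets.push(data.len()); // start of this column in the flat vector
            data.extend_from_slice(col);
        }
        Jagged { offsets, data }
    }

    // Emulated access to entry (col, row) of the jagged matrix.
    fn get(&self, col: usize, row: usize) -> u64 {
        self.data[self.offsets[col] + row]
    }
}
```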
Automated Verification of Consistency in Zero-Knowledge Proof Circuits
Circuit languages like Circom and Gnark have become essential tools for programmable zero-knowledge cryptography, allowing developers to build privacy-preserving applications. These domain-specific languages (DSLs) encode both the computation to be verified (as a witness generator) and the corresponding arithmetic circuits, from which the prover and verifier can be automatically generated. However, for these programs to be correct, the witness generator and the arithmetic circuit need to be mutually consistent in a certain technical sense, and inconsistencies can result in security vulnerabilities. This paper formalizes the consistency requirement for circuit DSLs and proposes the first automated technique for verifying it. We evaluate the method on hundreds of real-world circuits, demonstrating its utility for both automated verification and uncovering errors that existing tools are unable to detect.
Improved differential cryptanalysis of SPEEDY
SPEEDY is a family of lightweight block ciphers designed by Leander et al. Several differential attacks have been reported on the SPEEDY variants. However, nearly all of these attacks are based on differential characteristics with probabilities that differ from their reported values. These discrepancies arise from incorrect calculations of the (key-averaged) probability, particularly in consecutive steps within one round without intermediate key addition. In this paper, we revisit all reported differential characteristics and accurately calculate their key-averaged probabilities using quasidifferential trails. We extend this to also estimate the fixed-key probability. Our analysis reveals several characteristics with zero or significantly altered probability, invalidating several proposed attacks. We further implement a search algorithm and find a 5.5-round differential distinguisher that can be used to mount a full-round key recovery attack with a data complexity of $2^{183}$ and a time complexity of $2^{185}$. The memory complexity varies: in the chosen-plaintext setting, it is $2^{156}$, whereas in the chosen-ciphertext setting, it is $2^{36}$.
Tweakable Permutation-based Luby-Rackoff Constructions
Liskov, Rivest, and Wagner, in their seminal work, formulated tweakable blockciphers and proposed two blockcipher-based design paradigms, LRW1 and LRW2, where the basic design strategy is to xor the masked tweak to the input and output of a blockcipher. The 2-round cascaded LRW2 and 4-round cascaded LRW1 have been proven to be secure up to $\mathcal{O}(2^{3n/4})$ queries, but $n$-bit optimal security still remains elusive for these designs. In the same paper, Liskov et al. posed the open challenge of embedding the tweak directly in the blockcipher; to address this, Goldenberg et al. introduced the tweakable Luby-Rackoff (LR) constructions. They showed that if the internal primitives are random functions, then for tweaks with $t$ blocks, the construction needs $t + 6$ rounds to be optimally $n$-bit CPA secure and $2t + 8$ rounds to be optimally $n$-bit CCA secure, where respectively $t$ and $2t$ rounds were required to process the tweaks. Since blockciphers can be designed much more efficiently than pseudorandom functions, in many practical applications the internal primitives of LR ciphers are instantiated with blockciphers; this, however, incurs a birthday-bound security loss, which is undesirable in, e.g., lightweight cryptography.
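For reference, the two masking paradigms referred to above are (in the standard formulation of Liskov, Rivest, and Wagner; notation ours):
\[
\mathrm{LRW1}:\ \widetilde{E}_K(T, M) = E_K\big(T \oplus E_K(M)\big),
\qquad
\mathrm{LRW2}:\ \widetilde{E}_{K,h}(T, M) = E_K\big(M \oplus h(T)\big) \oplus h(T),
\]
where $E_K$ is a blockcipher, $T$ the tweak, and $h$ an XOR-universal hash of the tweak.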
This paper addresses the following two key questions affirmatively: (1) Can Goldenberg et al.'s results be extended to LR constructions with random permutations as internal primitives without compromising the optimal $n$-bit security? (2) Can the number of rounds required for handling long tweaks be reduced?
We formally define TLR-compatible functions for processing the tweak which, when composed with $4$ and $5$ rounds of the LR construction with random permutations as internal primitives, give $n$-bit CPA and CCA secure tweakable permutations, respectively. For the security analysis, we prove a general Mirror Theory result for three permutations. We also propose instantiating TLR-compatible functions with one-round LR, where a permutation (resp., two AXU hash functions) is used to mask single-block tweaks (resp., variable-length tweaks), thus obtaining the $n$-bit CPA (resp., CCA) secure tweakable permutation candidates $\mathsf{TLRP5}$ and $\mathsf{TLRP5+}$ (resp., $\mathsf{TLRP7}$ and $\mathsf{TLRP7+}$), using $5$ (resp., $7$) LR rounds, a significant reduction from the tweak-length-dependent results of Goldenberg et al.
We further extend the implications of our analysis of permutation-based LR as follows:
(1) We show $n$-bit CPA (resp., CCA) security of the $5$-round (resp., $7$-round) permutation-based LR construction, a notable improvement over the existing $2n/3$-bit security proved by Guo et al.
(2) We propose a new design paradigm, $\mathsf{DbHtF}$, for $n$-bit secure MACs, which involves hashing the message, passing the resulting diblock tag through four rounds of permutation-based LR, and then truncating the left block.