To achieve reliability on weak mobile connections, as well as speed when dealing with large files (such as photos, large videos, and files up to 1.5 GB), MTProto uses an original approach. This document is intended to clarify certain details of our setup and to address some points that may be difficult to understand at first glance.
Each plaintext message to be encrypted in MTProto always contains the following data, checked upon decryption in order to make the system robust against known problems with the components:

- server salt
- session_id
- message sequence number and unique message id
- message length
Telegram's end-to-end encrypted Secret Chats use an additional layer of encryption on top of the one described above. See Secret Chats, End-to-End Encryption for details.
While other ways of achieving the same cryptographic goals undoubtedly exist, we feel that the present solution is both robust and also succeeds at our secondary task of beating unencrypted messengers in terms of delivery time and stability.
We prefer to use well-known algorithms, created in the days when bandwidth and processing power were both a much rarer commodity. This has valuable side-effects for modern-day mobile development, provided one takes care of the known drawbacks.
The weak spots of such algorithms are also well known and have been exploited for decades. We use these algorithms in a combination that, to the best of our knowledge, prevents any known attack from succeeding. That said, we would be grateful to see any evidence to the contrary (so far absent) and will update our system accordingly.
If you have any comments, we would be happy to hear them at email@example.com.
You are also welcome to join in our competition — we are offering $300,000 to the first person to break Telegram encryption. Check out the contest announcement.
All Telegram apps ensure that msg_key is equal to SHA-1 of the decrypted message. It is important that the plaintext always contains message length, server salt, session_id and other data not known to the attacker.
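As a minimal sketch of this acceptance check (the 16-byte slice taken from the 20-byte SHA-1 digest is an illustrative assumption, not the normative MTProto layout):

```python
import hashlib

def msg_key_matches(msg_key: bytes, plaintext: bytes) -> bool:
    """Accept a decrypted message only if msg_key matches its SHA-1.

    Illustrative sketch: the exact offsets of msg_key within the
    digest are an assumption here; consult the protocol spec.
    """
    digest = hashlib.sha1(plaintext).digest()  # 20 bytes
    return msg_key == digest[4:20]

# Any tampering with the plaintext fails the check:
plain = b"salt|session_id|seq|len|payload"
key = hashlib.sha1(plain).digest()[4:20]
assert msg_key_matches(key, plain)
assert not msg_key_matches(key, plain + b"x")
```

The point of the check is that an attacker cannot produce a ciphertext that both decrypts under the correct key and passes this comparison.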
It is crucial that the AES decryption keys depend both on msg_key and on auth_key, which is known only to the parties involved in the exchange.
Our scheme is closer to MAC-and-Encrypt, with an essential modification: the encryption key and IV are hash-dependent. Since the resulting key and IV vary with the message data, this denies all common attacks on MAC-and-Encrypt.
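As an illustrative, non-normative sketch of such a hash-dependent derivation (the auth_key fragment offsets and output layout below are assumptions, not the actual MTProto KDF):

```python
import hashlib

def derive_key_iv(auth_key: bytes, msg_key: bytes) -> tuple[bytes, bytes]:
    # Hypothetical KDF for illustration only: the real MTProto KDF
    # combines several SHA-1 hashes of msg_key with distinct auth_key
    # fragments. What matters here is that both msg_key and auth_key
    # shape the AES key and IV.
    a = hashlib.sha1(msg_key + auth_key[0:32]).digest()
    b = hashlib.sha1(auth_key[32:64] + msg_key).digest()
    aes_key = (a + b)[:32]   # 256-bit AES key
    aes_iv = (b + a)[:32]    # 256-bit IV (IGE uses a two-block IV)
    return aes_key, aes_iv

# Changing msg_key changes both the key and the IV:
k1 = derive_key_iv(b"\x01" * 256, b"\xaa" * 16)
k2 = derive_key_iv(b"\x01" * 256, b"\xbb" * 16)
assert k1 != k2
```

Because the attacker knows neither auth_key nor the plaintext-dependent msg_key, the effective key and IV for each message are unpredictable to them.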
See also: Why not Encrypt-then-MAC?
Using Encrypt-then-MAC, or an authenticated mode such as GCM (Galois/Counter Mode), would enable the receiving party to detect unauthorized or modified ciphertexts, thus eliminating the need to decrypt them in case of tampering, at the cost of performing a MAC computation and verification for each message.
Provided that all MTProto clients ensure that SHA1(plaintext) = msg_key before accepting a message, and that the plaintext always contains the message length, server salt, session_id and other data not known to the attacker, using Encrypt-then-MAC in these circumstances would essentially add an extra layer of DDoS protection, under the assumption that decryption requires significantly more computational resources than the HMAC computation.
We believe that AES encryption / decryption is comparable in speed to this additional HMAC computation on devices currently in use. We estimate that our network bandwidth would be exhausted long before AES decryption takes up a significant percentage of CPU time. And the security checks performed on the client after decryption ensure that invalid or tampered-with messages will not be accepted.
SHA-1 serves our purposes well enough for the time being. No collisions have been found so far — and even if any are found, this will not break the MTProto encryption scheme, due to the way SHA-1 is used in the protocol (see here and here).
Even if it were possible to create messages with a chosen SHA-1 (a far greater achievement than merely finding a collision), it would achieve nothing here. This is because the SHA-1 is computed over the plaintext, while MTProto transmits only the ciphertext, encrypted with a key unknown to the attacker. Therefore, this usage is reasonably secure for the present day in our setup.
That said, SHA-1 is computed considerably faster than SHA-256 and other suitable and well-studied algorithms. This edge in speed is very important on older mobile devices, especially since Telegram can send full-quality photos, as well as large videos and other files of up to 1.5 GB each. We stick with SHA-1 for the time being, reserving the option to switch to a different hash function in a future revision of the protocol, as the computational power of both potential adversaries and user devices grows over time.
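The speed difference is easy to measure with a rough micro-benchmark (results vary widely by platform, and hardware acceleration can change the picture on modern CPUs):

```python
import hashlib
import time

def bench(algo: str, data: bytes, rounds: int = 20) -> float:
    """Time `rounds` hashes of `data` with the given algorithm."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(algo, data).digest()
    return time.perf_counter() - start

data = b"\x00" * (4 * 1024 * 1024)  # 4 MiB buffer, like a photo chunk
t_sha1 = bench("sha1", data)
t_sha256 = bench("sha256", data)
print(f"sha1: {t_sha1:.3f}s  sha256: {t_sha256:.3f}s")
```

On older mobile hardware without dedicated SHA instructions, the gap per gigabyte hashed is what motivates the choice above.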
Yes, we use IGE, but it is not broken in our implementation. The fact that we do not use IGE as a MAC, together with other properties of our system, makes the known attacks on IGE irrelevant.
Adaptive attacks are not even theoretically possible in MTProto, because a message must be fully formed before it can be encrypted: the key depends on the message content. As for non-adaptive CPA, IGE is secure against it, as is CBC.
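IGE chains both the previous ciphertext block and the previous plaintext block into each step: c_i = E_K(p_i XOR c_{i-1}) XOR p_{i-1}. The sketch below shows only the chaining; a deliberately insecure toy "cipher" stands in for AES, which is an assumption purely for illustration:

```python
BLOCK = 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Insecure stand-in for AES, used only to demonstrate IGE chaining.
    # XOR with the key is its own inverse, so encryption == decryption.
    return xor(block, key)

def ige_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    prev_c, prev_p = iv[:BLOCK], iv[BLOCK:]  # IGE uses a two-block IV
    out = b""
    for i in range(0, len(plaintext), BLOCK):
        p = plaintext[i:i + BLOCK]
        c = xor(toy_block_cipher(key, xor(p, prev_c)), prev_p)
        out += c
        prev_c, prev_p = c, p
    return out

def ige_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    prev_c, prev_p = iv[:BLOCK], iv[BLOCK:]
    out = b""
    for i in range(0, len(ciphertext), BLOCK):
        c = ciphertext[i:i + BLOCK]
        p = xor(toy_block_cipher(key, xor(c, prev_p)), prev_c)
        out += p
        prev_c, prev_p = c, p
    return out
```

Because every block's ciphertext depends on all preceding plaintext and ciphertext, flipping any earlier block garbles everything that follows.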
Various secrets (nonce, server_nonce, new_nonce) exchanged during key generation guarantee that the DH-key can only be obtained by the instance that initiated the exchange.
Notice that new_nonce is transferred explicitly only once, inside an RSA-encrypted message from the client to the server.
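The exchange itself follows textbook Diffie-Hellman. The sketch below uses toy parameters and omits the nonce binding entirely; real deployments use a 2048-bit safe prime, and the small prime here is an assumption for illustration only:

```python
import secrets

# Toy parameters, far too weak for real use (MTProto uses a
# 2048-bit safe prime); chosen only so the arithmetic is visible.
p = 0xFFFFFFFFFFFFFFC5  # largest prime below 2**64
g = 3

a = secrets.randbelow(p - 2) + 2  # initiator's secret exponent
b = secrets.randbelow(p - 2) + 2  # responder's secret exponent
g_a = pow(g, a, p)                # sent over the wire
g_b = pow(g, b, p)                # sent over the wire

# Both sides arrive at the same shared key; an eavesdropper who sees
# only g_a and g_b cannot recover it (for proper parameter sizes).
assert pow(g_b, a, p) == pow(g_a, b, p)
```

The nonces mentioned above are what binds a concrete run of this exchange to the party that initiated it.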
Keys for end-to-end encrypted secret chats are generated by a new instance of DH key exchange, so they are known only to the parties involved and not to the server. To establish the identities of these parties and to ensure that no MITM is in place, it is recommended to compare identicons generated from hashes of the DH secret chat keys (key visualizations).
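As an illustration of the idea (not Telegram's actual rendering), a key visualization can map a 128-bit fingerprint of the shared key onto a small grid of symbols, so that two users comparing identical pictures are in effect comparing hashes of their keys:

```python
import hashlib

def key_visualization(shared_key: bytes) -> str:
    # Illustrative sketch: take a 128-bit fingerprint of the key and
    # map each 2-bit chunk to one of four symbols in an 8x8 grid.
    # The real identicon algorithm and palette are not specified here.
    fingerprint = hashlib.sha1(shared_key).digest()[:16]  # 128 bits
    symbols = " .o#"
    cells = []
    for byte in fingerprint:
        for shift in (6, 4, 2, 0):
            cells.append(symbols[(byte >> shift) & 0b11])
    rows = ["".join(cells[r * 8: r * 8 + 8]) for r in range(8)]
    return "\n".join(rows)
```

Two parties holding the same DH key render the same picture; a MITM holding two different keys cannot make both pictures match.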
Telegram's Secret Chats support Perfect Forward Secrecy; you can read more about it here.
By definition, the known-plaintext attack (KPA) is an attack model for cryptanalysis where the attacker has samples of both the plaintext and its encrypted version (the ciphertext).
The AES IGE used in MTProto is robust against KPA (see this, if you wonder how one can securely use IGE). On top of that, the plaintext in MTProto always contains server_salt and session_id.
By definition, a chosen-plaintext attack (CPA) is an attack model for cryptanalysis which presumes that the attacker has the capability to choose arbitrary plaintexts to be encrypted and obtain the corresponding ciphertexts.
MTProto uses AES in IGE mode (see this, if you wonder how one can securely use IGE), which is secure against non-adaptive CPAs. IGE is known to be insecure against blockwise-adaptive CPA, but MTProto fixes this in the following manner:
Each plaintext message to be encrypted always contains the following data, checked upon decryption:

- server salt
- session_id
- message sequence number and unique message id
- message length
On top of this, in order to replace the plaintext, you would also need to use the right AES key and iv, both dependent on the auth_key. This makes MTProto robust against a CPA.
By definition, a chosen-ciphertext attack (CCA) is an attack model for cryptanalysis in which the cryptanalyst gathers information, at least in part, by choosing a ciphertext and obtaining its decryption under an unknown key. In the attack, an adversary has a chance to enter one or more chosen ciphertexts into the system and obtain the resulting plaintexts. From these pieces of information the adversary can attempt to recover the hidden secret key used for decryption.
Each time a message is decrypted in MTProto, a check is performed to see whether the msg_key is equal to the SHA-1 of the decrypted data. The plaintext (decrypted data) also always contains message length, server salt and sequence number. This negates known CCAs.
By definition, length extension attacks are a type of attack in which certain types of hashes are misused as message authentication codes, allowing for the inclusion of extra information.
A message in MTProto consists of an msg_key, essentially equal to SHA-1 of the plaintext with some additional parameters, followed by the ciphertext. The attacker cannot append extra bytes to the end and recompute the SHA-1, since the SHA-1 is computed from the plaintext, not the ciphertext, and the attacker has no way to obtain the ciphertext corresponding to the extra plaintext bytes she may want to add.
Apart from that, changing the msg_key would also change the AES decryption key for the message in a way unpredictable to the attacker, so even the original prefix would decrypt to garbage. This would be immediately detected, since the app performs a security check to ensure that the SHA-1 of the plaintext matches the msg_key received.
Replay attacks are prevented because each plaintext to be encrypted contains the server salt and the unique message id and sequence number.
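A receiver-side sketch of this protection (the exact acceptance rules, such as acceptance windows and salt rotation, are assumptions here, not the normative client behavior):

```python
class ReplayGuard:
    """Rejects messages whose (server_salt, msg_id) was already seen.

    Illustrative only: real clients also require msg_id to be roughly
    monotonic and tied to the current server salt and session.
    """

    def __init__(self) -> None:
        self.seen: set[tuple[bytes, int]] = set()

    def accept(self, server_salt: bytes, msg_id: int) -> bool:
        tag = (server_salt, msg_id)
        if tag in self.seen:
            return False  # replayed message: drop it
        self.seen.add(tag)
        return True

guard = ReplayGuard()
assert guard.accept(b"salt", 1)      # first delivery passes
assert not guard.accept(b"salt", 1)  # exact replay is rejected
```

Because the salt and ids live inside the encrypted, integrity-checked plaintext, an attacker cannot alter them to make an old message look fresh.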
Telegram has two modes of communication — ordinary chats using client-server encryption and Secret Chats using end-to-end encryption.
Client-server communication is protected from MITM attacks during DH key generation by means of a server RSA public key embedded into the client software. After that, if both clients trust the server software, the Secret Chats between them are protected by the server from MITM attacks.
The interface offers a way of comparing Secret Chat keys for users who do not trust the server. Visualizations of the key are presented in the form of identicons (example here). By comparing key visualizations, users can make sure that no MITM attack has taken place.
A 128-bit fingerprint is used to create the key visualization. If there were a way to efficiently cycle through DH parameters, an active man-in-the-middle attacker could theoretically spoof the fingerprint.
Our calculations show that for this kind of attack to succeed in one month — even with all possible optimizations using cycle finding, etc. — the attacker would need modern equipment worth around a trillion dollars. The attack would also consume approximately 50 billion kWh of electricity in the process, worth hundreds of millions of dollars. All of this for just one secret chat. Deployment of FPGAs and ASICs might lower this estimate by an order of magnitude, but the attack would still remain infeasible.
Note that it would also take such a spoofed secret chat an entire month to be created, which is likely to be noticed by the victims.
Our detailed calculations are presented in this article.