The Mathematics of Security


Information sent around the world is protected by a variety of encryption protocols, one of the most popular of which is RSA. But these safeguards on our information and data may not be as secure as is widely believed. Why, and how can they be made more secure?


Perhaps one of the most important aspects of society is communication. It’s what allows diverse populations from across the world to cooperate and socialize, and ideas and opinions to spread around the globe and make us a truly global community.

From writing on papyrus scrolls to sending messages instantly around the world, methods of communication have evolved rapidly over the years. And yet, a concern that has never ceased to exist is that of privacy. The need to keep important messages safe from prying eyes and ears resulted in the field of study now known as cryptography.

The word conjures up images of hackers locked in battle with cryptographers, attempting to ferret out secrets of international import. But cryptography has far older origins than one might think. The oldest known use of codes can be traced to Egypt around 1900 BCE, where non-standard hieroglyphs were found carved into stone. Since then, codes and ciphers grew progressively more complicated, from the Caesar Cipher employed by Julius Caesar (simply shift every letter of the alphabet to the left or right by a fixed number of positions) to the supposedly unbreakable Enigma machine employed by the Germans in WWII. The Allied efforts to crack Enigma, greatly aided by the work of Alan Turing, marked the beginning of the era of computers – an era that would see cryptography mature into a well-defined field of study.

It is in this field that Professor Santanu Sarkar, from the Department of Mathematics, works. His research considers encryption, as he says, “from the attacker’s point of view”. It concerns the use of mathematical constructs called lattices in attempts to break the RSA cryptosystem, one of the most common encryption methods in use today.

Before explaining his research, Professor Sarkar outlines the history of modern cryptography, whose foundations were laid in 1976, when Whitfield Diffie and Martin Hellman published a paper that would spark a revolution in the field. This paper outlined a new concept – the key-exchange system. Until then, cryptosystems had used symmetric-key encryption: both the sender and the receiver of information used a single, shared key (a term for a large number used in the encryption process) to encrypt plaintext (unencrypted information) and to decipher ciphertext (encrypted information). This necessitated the use of a secure channel for the key to be shared between the sender and the receiver. But the need for a secure channel just to set up a secret key was an insurmountable chicken-and-egg problem in the real world.

Diffie and Hellman’s paper, on the other hand, posited that a pre-shared secret key was not necessary. Instead, they proposed that three keys be used – one known to both parties (traditionally called Alice and Bob in cryptographic lingo), and two others, each known only to one of the two parties. Their method allowed the two sides to arrive at a common secret, and hence to encrypt and decrypt messages, without ever sending their private values over the channel.
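To make the idea concrete, here is a minimal sketch of the kind of exchange Diffie and Hellman described, written in Python with deliberately tiny, illustrative numbers: the prime p and base g are public, each party keeps one value secret, and both still end up with the same shared key.

```python
# A toy Diffie-Hellman key exchange with deliberately small numbers.
# Real systems use moduli that are thousands of bits long.

p = 2**127 - 1      # publicly agreed prime modulus (a Mersenne prime)
g = 5               # publicly agreed base

a = 123456789       # Alice's secret; never transmitted
b = 987654321       # Bob's secret; never transmitted

A = pow(g, a, p)    # Alice sends this to Bob over the open channel
B = pow(g, b, p)    # Bob sends this to Alice over the open channel

# Each side combines its own secret with the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)

assert shared_alice == shared_bob   # both arrive at the same shared key
print(hex(shared_alice))
```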

From the foundations that they laid, a more secure system arose: public-key encryption, which used a public and a private key. The public key would be available to anyone who wanted to communicate securely with a system, while the private key would be known only to the system itself. Encryption would be carried out with the public key and decryption with the private key. It was also mandatory that the private key not be deducible from the public key, as that would compromise the system’s security. Since the private key didn’t need to be shared with anyone via a potentially insecure channel, public-key encryption was clearly a better choice.

A schematic explaining how public-key encryption works. Source: Wikimedia Commons

Although Diffie and Hellman were able to prove the feasibility of such an encryption system, they weren’t able to come up with a viable system themselves. That was accomplished two years later, by Ron Rivest, Adi Shamir and Leonard Adleman, researchers at MIT. That eponymous cryptosystem – RSA – has since become one of the strongest known encryption standards in the world.

It is now used mainly as part of hybrid encryption methods: data is encrypted using a symmetric-key system, and the shared key is then encrypted using RSA. This is largely because of the RSA algorithm’s computational inefficiency – encrypting the data itself using RSA would take a very long time.
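As a concrete illustration of the hybrid pattern, here is a minimal sketch, assuming the third-party Python "cryptography" package is available (the article does not tie the description to any particular library): the bulk data is encrypted with a fast symmetric AES key, and only that short key is encrypted with RSA.

```python
# A sketch of hybrid encryption, assuming the third-party Python
# "cryptography" package: bulk data goes through fast symmetric AES,
# and only the short AES key is encrypted ("wrapped") with slow RSA.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver's RSA key pair; the public half is shared with the world.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the message with a fresh AES key, then wrap that key.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"a long message...", None)
wrapped_key = public_key.encrypt(aes_key, oaep)

# Receiver: unwrap the AES key with the RSA private key, then decrypt.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"a long message..."
```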

At a basic level, the RSA algorithm is based on the premise that the product of two large prime numbers is very hard to factorize. Put into mathematical terms, consider two prime numbers, P and Q, and their product, N. Then, the integer solutions to the equation p(x, y) = N – xy are the factors of N. Trivial (irrelevant) solutions to this equation include (x=1, y=N) and (x=N, y=1). The important solutions, though, are (x=P, y=Q) and (x=Q, y=P). While a computer can solve this equation relatively quickly when N is small, larger values of N result in runtimes that render decryption attempts infeasible. For example, it is estimated that factorizing a single 1024-bit value of N (i.e., N of the order of 2^1024) would take approximately 4000 years!
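The premise is easy to appreciate with a toy experiment: a brute-force search for the non-trivial solutions of p(x, y) = N – xy finishes instantly for a small N, while the same idea applied to a 1024-bit modulus is hopeless. A minimal Python sketch, with a made-up toy modulus:

```python
# Brute-force search for the non-trivial solutions of p(x, y) = N - x*y,
# i.e. the prime factors of N. This only finishes because N is tiny; for a
# 1024-bit N, no known classical algorithm makes such a search feasible.
N = 3233                      # toy modulus, actually 61 * 53

def factor(n):
    x = 2
    while x * x <= n:
        if n % x == 0:        # found x, y with n - x*y = 0 and 1 < x <= y
            return x, n // x
        x += 1
    return None               # n itself is prime

print(factor(N))              # (53, 61)
```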

At this point, Professor Sarkar mentions a caveat. “Till today, the RSA encryption hasn’t been broken in a feasible amount of time by any algorithms that conventional computers can run. However, there is an algorithm, called Shor’s algorithm, that can be used to break RSA encryption using quantum computers.” But since quantum computers are still in nascent stages of development, the RSA algorithm is still considered to be a bastion of cryptography.

The mathematical framework above describes the simplest conceptualization of RSA. The algorithm actually uses the equation ed = 1 + k(N + s), where, as before, N = PQ, and s = 1 – P – Q. (This relation comes from the requirement that e and d be inverses of each other modulo (P – 1)(Q – 1), which is exactly N + s.) Here e and N are known (together they form the public key), while d, k and s are unknown (with d serving as the private key). The equation can be expressed as a polynomial p(x, y, z) = ex – 1 – y(N + z). As before, obtaining the non-trivial solutions of this polynomial is equivalent to breaking the RSA encryption.
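A quick numerical check of this relation, using toy primes chosen purely for illustration:

```python
# Verify the key equation e*d = 1 + k*(N + s), with s = 1 - P - Q, for a
# toy key pair. Note that N + s equals (P - 1)*(Q - 1), the modulus with
# respect to which e and d are inverses of each other.
P, Q = 61, 53                 # toy primes; real ones are hundreds of digits
N = P * Q                     # 3233
s = 1 - P - Q
phi = N + s                   # (P - 1)*(Q - 1) = 3120

e = 17                        # public exponent, chosen coprime to phi
d = pow(e, -1, phi)           # private exponent (modular inverse, Python 3.8+)
k = (e * d - 1) // phi

assert e * d == 1 + k * (N + s)
print(d, k)                   # 2753 15

# The same toy keys also encrypt and decrypt as described earlier:
message = 65
assert pow(pow(message, e, N), d, N) == message
```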

Possible avenues of attack on the RSA algorithm were discovered soon after it was published. Most, though, were inefficient in terms of the computing time required. One of the most fruitful lines of attack was established by Don Coppersmith, an American mathematician. He postulated and proved a theorem which, when used in conjunction with mathematical constructs called lattices, could break the RSA algorithm in polynomial time (this means that the running time is a polynomial function of the input size; it is largely used to describe programs whose running times don’t blow up too fast). Fortunately for cryptographers around the world, the guarantee of success for such an attack came with certain conditions attached.

A lattice, rendered in Sage.
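To give a flavour of what such a lattice-based attack looks like in code, here is a minimal sketch that assumes a Sage environment (the same open-source system used to render the lattice above, and which Professor Sarkar uses for his own simulations). Sage’s small_roots method applies Coppersmith’s lattice technique to find small roots of a polynomial modulo N; in this illustrative setup, it recovers the short unknown part of a message encrypted with a small public exponent. All the numbers are made up for illustration.

```python
# A minimal Coppersmith-style attack, assuming a Sage environment.
# Sage's small_roots() uses lattice reduction to find roots of a
# polynomial mod N that are small enough -- the kind of guarantee
# Coppersmith's theorem provides.
from sage.all import ZZ, Zmod, PolynomialRing, next_prime

p = next_prime(2**256)
q = next_prime(2**257)
N = p * q                      # the attacker knows only N and e
e = 3

known = ZZ(2**400)             # the fixed, guessable part of a message
secret = ZZ(42424242)          # the short unknown part the attacker wants
c = ((known + secret)**e) % N  # the ciphertext the attacker intercepts

R = PolynomialRing(Zmod(N), 'x')
x = R.gen()
f = (known + x)**e - c         # secret is a root of f mod N, and it is small

print(f.small_roots(X=2**40))  # lattice reduction should recover [42424242]
```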

The encryption could be broken on a feasible time scale only if d, the private key, was less than N^0.292 – which implies that for a secure RSA design, d would have to be greater than N^0.292. But Professor Sarkar prefers to think of it as an upper bound for the system to be vulnerable, rather than a lower bound for security. “I always look at the problem from the attacker’s point of view”, he says with a smile. “Hence, I think of values of d for which the system is insecure.” This bound was proven by two cryptographers, Dan Boneh and Glenn Durfee, in 1999. For example, if N were a 1000-digit number, the RSA system in question would be secure as long as d had roughly 292 or more digits. However, the mathematical community conjectured that for the RSA system to be truly secure, d would have to be greater than N^0.5. Using the example from before, d would have to have more than 500 digits.
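The bound is, in essence, a statement about the relative sizes of d and N. A rough Python sketch, using digit counts as a stand-in for exact magnitudes and illustrative numbers throughout:

```python
# Rough check of where a private exponent d falls relative to the
# Boneh-Durfee bound (N^0.292); the conjectured safe region starts at
# N^0.5. Digit counts are used as a convenient approximation.

def exponent_of(d, N):
    """Return t such that d is roughly N^t."""
    return (len(str(d)) - 1) / (len(str(N)) - 1)

N = 10**999           # a stand-in for a 1000-digit modulus
d_small = 10**250     # a 251-digit d
d_large = 10**600     # a 601-digit d

for d in (d_small, d_large):
    t = exponent_of(d, N)
    verdict = "breakable by the lattice attack" if t < 0.292 else "outside the proven attack range"
    print(f"d ~ N^{t:.3f}  ->  {verdict}")
```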

Any increase in this bound on d would have two consequences. First, it would expose as insecure any RSA systems that used values of d below the new bound. Second, increasing the value of d in an RSA system significantly increases the time an attack based on Coppersmith’s theorem would take. Hence, raising the bound on d contributes greatly to improving the security of RSA systems used across the world.

Professor Sarkar goes on to explain that he worked on a further variant of this RSA scheme. “N doesn’t have to be only a product of two prime numbers. It can instead be of the form N = P^r Q, where r is another integer. I worked on proving bounds on d when r = 2.” Professor Sarkar’s work improved the bounds on d from d < N^0.22 to d < N^0.395. Talking about the implications of his work, he says, “The results published will prompt RSA system designers to revise their designs. Since there is a larger range of d over which RSA can be broken, systems will have to be designed with the new bounds in mind.”
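To get a sense of scale for this improvement, the sketch below compares the old and new exponents for a hypothetical 1000-digit modulus of the form N = P^2 Q; the digit counts are purely illustrative.

```python
# How much the vulnerable range of d grows when the bound for N = P^2 * Q
# moves from d < N^0.22 to d < N^0.395 (illustrative digit counts only).

digits_of_N = 1000
old_limit = 0.22 * digits_of_N     # ~220-digit d was the old attack ceiling
new_limit = 0.395 * digits_of_N    # ~395-digit d is now within reach

print(f"old bound: d below roughly {old_limit:.0f} digits was attackable")
print(f"new bound: d below roughly {new_limit:.0f} digits is attackable")
print(f"designers must now pick d with more than about {new_limit:.0f} digits")
```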

I mention to Professor Sarkar that his work seems highly theoretical. He’s quick to point out that it does involve some simulations – he runs attacks on RSA-protected systems using an open-source software system called Sage. This serves to validate his results. “My work, and in fact all work since Boneh and Durfee’s paper and Coppersmith’s work, involves some implicit mathematical assumptions that no one has formally proved. I need to run actual attacks in order to validate the bounds I derive.” But, he admits with a rueful grin, “It can get tedious at times. You just have to keep trying the problem from different angles. I also like what I do.”

When I ask him how he decided to venture into cryptography, he points to his alma mater, ISI Kolkata, as his inspiration. “ISI is well known for cryptography. Once I started working in this field, I saw problems of this type, and they interested me. I still work with my colleagues there, as well as with collaborators in China.”

Professor Sarkar is currently attempting to improve the bounds described above even further. He’s also working on problems related to another encryption algorithm, RC4, primarily employed in wireless networks.


 

Professor Santanu Sarkar received his PhD degree in mathematics from the Indian Statistical Institute. He is currently an Assistant Professor at IIT Madras, India. Before that, he was a guest researcher at the National Institute of Standards and Technology. His main research interests include cryptology and number theory.

 

 



Nithin Ramesan is a junior undergraduate in the department of Electrical Engineering. He likes to quiz, write and read Calvin & Hobbes. For bouquets or brickbats, he can be contacted at nithinseyon@gmail.com.

 

 

