Tuesday, May 4, 2010

Lecture 24: Secure Communications

Today's lecture focused primarily on secure communications. We started by discussing why hackers break into organizations: some seek a challenge or fame, while others have monetary or ideological motives. Despite the popular image of the outside attacker, however, most attackers are internal to the network they are trying to hack. We were informed of some basic network security threats, including interception, impostors, remote logins as the root user, and threats against content.

We were then given some detail on a popular type of attack called a "replay attack," in which an attacker intercepts a message and then "replays" it, potentially resending login information and gaining unauthorized access. A simple way to protect against this type of attack is to include a time stamp in each message. We were also told of some popular denial-of-service attacks, including transmission failure, connection flooding, and distributed denial of service. VPNs are also a security concern, given the nature of what they do, and we went over a few points on how VPNs work.

Quite a large portion of this lecture was spent discussing IPsec, which is a form of IP security that allows for secure transmission of information over IP networks. This is necessary because normal IP has no security. The lecture concluded by noting that network security is only one piece of the puzzle--many other areas of vulnerability should be addressed to achieve the coveted state of being "totally secure".
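The timestamp defense against replay attacks can be sketched in a few lines of Python. This is a toy illustration, not the protocol from the lecture: the 60-second freshness window, the message ids, and the extra idea of remembering already-seen ids are assumptions added for the example.

```python
import time

FRESHNESS_WINDOW = 60  # seconds a message stays valid (assumed tolerance)
_seen = set()          # ids of messages already accepted

def accept(msg_id, timestamp, now=None):
    """Reject replayed or stale messages.

    A receiver accepts a message only if its timestamp is recent and
    its id has not been seen before; a replayed copy fails both ways.
    """
    now = time.time() if now is None else now
    if abs(now - timestamp) > FRESHNESS_WINDOW:
        return False          # stale: likely a replay of an old capture
    if msg_id in _seen:
        return False          # duplicate: a direct replay
    _seen.add(msg_id)
    return True
```

A captured login message replayed a few seconds later is rejected as a duplicate, and one replayed much later is rejected as stale.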

Monday, May 3, 2010

Overview Session

We will have an overview session on Wednesday, May 5 at 12pm.

Saturday, May 1, 2010

Lecture 18: Access Control

This lecture began with Nathan talking about Trojan horses and backdoor insertion. A Trojan horse is a secret, undocumented routine embedded within a useful program; typical functions include capturing the screen, stealing data, and modifying files. A Trojan horse cannot replicate itself. He then went on to backdoor insertion, which lets an attacker bypass normal authentication, security, and access routines.

After Nathan presented, we spent most of the class reviewing operating systems. We went over memory and address protection, which is how the OS prevents programs from corrupting other programs or data. Often the OS can exploit hardware support for this protection. We then went over some protection techniques. For example, a fence register protects the operating system from user programs. Then there is tagged architecture, in which each memory word has one or more extra bits that identify access rights to that word. We then went over segmentation, in which each program has multiple address spaces. Some advantages are that users can share access to a segment with potentially different access rights, and users cannot access an unpermitted segment. We then went over paging, and how it is done and used in operating systems. Its advantages are similar: users cannot access an unpermitted page, and users can share access to a page with potentially different access rights.
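The per-page access rights mentioned above can be illustrated with a toy page table in Python; the page size, table contents, and rights sets are invented for the example.

```python
# A toy page table: page number -> frame number plus access rights.
PAGE_SIZE = 4096
page_table = {
    0: {"frame": 7, "rights": {"r", "x"}},   # code: read + execute
    1: {"frame": 3, "rights": {"r", "w"}},   # data: read + write
}

def translate(virtual_addr, access):
    """Map a virtual address to a physical one, enforcing page rights."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    entry = page_table.get(page)
    if entry is None or access not in entry["rights"]:
        raise MemoryError(f"fault: {access!r} access to page {page}")
    return entry["frame"] * PAGE_SIZE + offset
```

A read of the code page succeeds, but a write to it faults, just as an unpermitted page access would.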

After the review we went over access control, though we did not make it very far into the topic--only about five slides. We discussed the three goals of access control: check every access, enforce least privilege, and verify acceptable use. We then talked about the issues with access control: the list becomes too large if many shared objects are accessible to all users, and handling multiple permissions gets complicated. This is as far as we got on the slides for this day.
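The "check every access" goal can be sketched with a toy access-control list; the users, files, and rights below are invented for the example.

```python
# A toy access-control list: object -> {subject: set of rights}.
acl = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
    "notes.txt":  {"bob": {"read", "write"}},
}

def allowed(subject, obj, right):
    """Check every access (goal 1); anything not explicitly granted is
    denied, which is the default-deny half of least privilege (goal 2)."""
    return right in acl.get(obj, {}).get(subject, set())
```

The size problem from the lecture is visible here too: one entry per subject per shared object grows quickly when every user can reach every object.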

Wednesday, April 28, 2010

Lecture 23: Network Threats

Today's presentation by Troy covered the WEP (Wired Equivalent Privacy) cryptosystem for wireless networks. The system used a flawed encryption method and revealed too much information in the packets it created. Cracking the cryptosystem requires thousands of snooped packets, but these can be collected in little time, and after the collection period, key cracking takes very little time.


Today's lecture covered network security.

Networks, and Internet-based networks in particular, are especially vulnerable to attack. Networks afford attackers anonymity, plenty of points of attack, easier access, and potentially more security holes when computers with different security systems or OSes are part of the network. The protocol used in a network can also be a weak point if it has vulnerabilities.

There are many aspects to possible attacks on a network. All attacks first need some sort of information gathering, and reconnaissance on the vulnerabilities of a network is generally easy. Port scans reveal which ports a computer is listening on. People involved in the network are generally good sources of information if a little social engineering is used.

Eavesdropping on a network tends to be simple. Wired connections can usually be wiretapped stealthily, and wireless connections are even easier to eavesdrop on.

Impersonation attacks involve pretending to be some member of a network. This involves either obtaining a password or exploiting vulnerabilities in rights management systems.

Spoofing attacks are a weaker form of impersonation, but they apply to more than just users. For example, a phishing attack involves the attacker spoofing a website to look just like another website, with a convincing address to add to the illusion. Spoofing attacks also cover session hijacks and man-in-the-middle attacks.

Session hijacking is a spoofing attack where an attacker hijacks a TCP connection or HTTP session and inserts malicious data or obtains private information. Man-in-the-middle attacks are similar, but instead of masquerading as an endpoint, the attacker becomes an intermediate node, possibly masquerading as both endpoints at once. It may also allow the attacker to alter packets as they move around the network.

Attackers may also attack the topology of the network by poisoning DNS caches, creating "evil twins", or creating "black holes" which attract packets and drop them.

In some cases, the mere existence of communication can itself be important information, and traffic flow analysis tries to detect these covert conversations.

Websites have their own sets of vulnerabilities, mostly involving the interaction between clients and the server generating web pages. Some exploits include modified state information, cross-site scripting, buffer overflows, etc. Some vulnerabilities arise when the server asks the client to run a certain piece of code (for example, Java applets), which attackers can exploit to harm the user.

(Distributed) Denial of Service attacks target a network's availability. This can be done by disrupting physical connections, flooding connection attempts via SYN flooding, ping floods via smurf attacks, etc. Attackers may use botnets to perform these attacks, since they tend to require huge amounts of resources.


All of these attacks are doable with easy-to-get, convenient programs.

Lecture 21: Trusted Operating System

Lecture #21 on Monday, April 14th began with a presentation by Gabriel about non-malicious program errors. These are the classic errors that have enabled many recent security breaches. He covered three types of non-malicious program errors: buffer overflows, incomplete mediation, and time-of-check to time-of-use (TOCTTOU) errors. After noting the importance of the stack, he explained the basic ways a buffer can be overflowed with the help of a small example program. EBP holds the base address of the current stack frame. He explained a stack-overrun attack with the help of a basic algorithm, and concluded the presentation by mentioning some methods to prevent buffer overflows: a non-executable stack, static analysis, dynamic runtime protection, and using safer versions of library functions.
Then, Dr. Gunes gave us an overview of Lecture #20, during which he talked about trusted operating system design and different security design principles. He highlighted how designing a trusted OS complicates things further, contrasting an ordinary operating system's functions and security features with those of a trusted operating system. Further, he mentioned that the kernel is the part of an OS that performs the lowest-level functions, whereas the security kernel is responsible for enforcing security mechanisms for the entire OS. The security kernel is characterized by six properties: coverage, separation, unity, modifiability, compactness, and verifiability.
Then, Dr. Gunes moved on to the Lecture #21 topics and started with the term reference monitor, the portion of a security kernel that controls accesses to objects--in short, it acts as a gatekeeper. Hardware, processes, primitive files, protected memory, and inter-process communication are the system elements on which security enforcement depends. He remarked that a piece of hardware is harder to tamper with than software. Next, he explained a typical division into TCB and non-TCB sections with the help of a diagram, and then described the four basic interactions the TCB monitors: process activation, execution domain switching, memory protection, and I/O operation.
Further, he described a combined security kernel/operating system architecture as well as a separate security kernel architecture. Physical, temporal, cryptographic, and logical separation are the four ways to separate one process from others. Then, he drew our attention to the concepts of virtualization and virtual machines, as well as the layered OS design with modules operating in different layers. There are three ways to assure that a model, design, and implementation are correct: testing, verification, and validation.
Finally, Dr. Gunes moved on to a new chapter and started with the term security policies. He noted that military security policy is a hierarchical policy, and he emphasized the ideas of compartments and sensitivity levels. He went through classification and clearance concepts, and concluded the lecture by briefly describing four different security models: the lattice model, the Bell-La Padula model, the Harrison-Ruzzo-Ullman model, and the Take-Grant model.

Wednesday, April 14, 2010

Lab assignment on Trusted Computing

I have uploaded the second lab assignment. The deadline is Tuesday, Apr 27 at 11:00 am.

You may post questions or comments under this blog entry.

Tuesday, April 13, 2010

Lecture 20: Trusted Operating System

The lecture began today with a presentation by Spencer Dawson on rainbow tables and their use in cracking passwords. A rainbow table is a lookup table that offers a time-memory tradeoff for recovering a plaintext password from a password hash. The table covers the hashes of all possible inputs up to a character limit. These tables have both advantages and limitations. The advantages are that they are built once and used many times, they make looking for a password faster since cracking becomes a table-lookup problem, and they are perfect for cracking weak hashes. The limitations are that generating them always takes worst-case time, they are very large (a table for 8 characters is 134.6 GB), and they become infeasible when passwords are salted.
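The precomputed-lookup idea behind rainbow tables can be shown in a few lines of Python (minus the hash-chain compression a real rainbow table uses), along with why salting defeats it. The password and salt values are invented.

```python
import hashlib
import itertools
import string

def md5hex(s):
    return hashlib.md5(s.encode()).hexdigest()

# Precompute the hash of every lowercase password of up to 3 letters.
# (A real rainbow table compresses this with hash chains; a plain
# lookup table is enough to show the time-memory tradeoff.)
table = {}
for n in range(1, 4):
    for combo in itertools.product(string.ascii_lowercase, repeat=n):
        pw = "".join(combo)
        table[md5hex(pw)] = pw

def crack(h):
    """Recover a password by table lookup instead of brute force."""
    return table.get(h)

# Salting defeats the table: the victim stores md5(salt + password),
# and no salted hash appears among the precomputed entries.
```

Built once, the table answers every subsequent lookup instantly, but a single salt makes every precomputed entry useless.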

After the presentation, we continued with a lecture on trusted operating systems. An operating system is a complex system that is very difficult to design, and this complexity combined with security issues makes it an even harder design problem. The system can be created by following the path of listing requirements, designing, and then testing. There are several security design principles, including least privilege, permission-based design, separation of privilege, and ease of use. The features normally included in an ordinary OS include authentication of users, protection of memory, file I/O, and allocation of and access control to general objects. Security features in an ordinary OS include enforcement of sharing, fair service, and protection of OS data. A trusted OS includes still more features: identification and authentication, mandatory access control, object reuse protection, a trusted path, accountability and audit, and intrusion detection.

The kernel is the part of the OS that performs the lowest-level functions, and the security kernel is responsible for security mechanisms for the entire OS. The security kernel is characterized by six properties: coverage, separation, unity, modifiability, compactness, and verifiability.

Friday, April 9, 2010

Lecture 19: User Authentication (Apr 7)

Lecture 19 began with a presentation on the effects of quantum cryptography on the future of computer security. The discussion was twofold. First, the use of photons to transfer data provides a method of complete protection against man-in-the-middle attacks (the BB84 protocol). The second topic was the use of quantum computers to defeat security measures that would be infeasible to break with modern systems (Shor's algorithm).

Dr. Gunes then continued the class with last week's lecture on access control. This portion of the lecture reviewed the benefits and drawbacks of access control lists (ACLs), access control matrices (ACMs), and capabilities as methods of restricting access.

This week's topic, user authentication, was about how electronic systems identify and authenticate users. Identifying users is a difficult task for machines. Machines can use a variety of features for identification, including what a person knows, what a person has, or even a person's physical features, each of which has its own disadvantages. We also discussed the additional difficulties of remote logins, and the practice of combining multiple authentication methods for enhanced security.

Wednesday, April 7, 2010

Lecture 17: Operating Systems Security (Mar 31)

This lecture consisted of two parts: the first of which was a presentation by Evander Jo on code obscurity; the latter half of the lecture was a lecture from Dr. Gunes on operating systems security.

In Jo's presentation, the idea (and issue) of security through obscurity was presented. Obscurity is similar to steganography, but different in that it does not necessarily aim to hide information within a message; rather, it aims to confuse the interpretation of a message. A common tactic among exploit developers is to obfuscate their code upon completing an exploit for some arbitrary vulnerability. This delays the analysis of their code by security professionals and therefore buys the exploit developers breathing room with respect to the discovery and analysis of their code. The issue with applying security through obscurity is the test of time: it is not desirable to trust a system's security when it is based on obfuscation, because it is only a matter of time until someone correctly interprets the obfuscated code.

In the lecture on operating systems security, Dr. Gunes first provided a brief history and discussion of operating systems. Next, he outlined what exactly an operating system is trying to protect. One such resource is memory, with address protection ensuring that different users can use the same system without compromise or interference from other users. With that, several protection techniques were discussed that allow sharing of resources while keeping them separated. One such technique is the use of base and bound addresses, in which each user is supplied a base address and a top-level address in memory that only they have access to. The issue here is the efficiency of partitioning, since some users may require more space than others. Later on, other present-day operating system techniques were discussed, such as segmentation and paging. The lecture concluded with a brief overview of the Intel x86 architecture.
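The base/bound technique reads naturally as a two-register comparison on every access; this Python sketch uses invented register values.

```python
class AddressFault(Exception):
    """Raised when an access falls outside the program's region."""

def check_access(addr, base, bound):
    # The hardware compares every address a user program issues
    # against the base and bound registers; anything outside the
    # half-open range [base, bound) faults.
    if not (base <= addr < bound):
        raise AddressFault(addr)
    return addr - base   # offset within the program's own region
```

The partitioning inefficiency from the lecture follows directly: each program gets one contiguous [base, bound) region, regardless of how much of it the program actually needs.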

Wednesday, March 31, 2010

Homework 3

3rd homework on Password Cracking is posted at http://www.cse.unr.edu/~mgunes/cs450/HW3.htm.

The deadline is Thursday, Apr 8 at 11:00 am.

Lecture 16: Targeted Malware (Mar 29)

Lecture #16 started with Alex Rudd's presentation, which gave us an overview of the history of digital rights management (DRM), a technological way of limiting access to copyrighted material. Then Dr. Gunes gave us an overview of Lecture #15, during which he talked about what a virus is, how it propagates, what a worm is, and the difference between a worm and a virus. He also talked about rabbits/bacteria, logic/time bombs, Trojans, trap doors, and droppers. The four phases of a virus--the dormant, propagation, triggering, and execution phases--were also discussed, as were the different types of viruses and how they append themselves to programs. He mentioned the virus signatures that help identify a virus, such as its storage pattern, execution pattern, and transmission pattern, along with polymorphic viruses and the approaches antivirus software takes to protect against viruses. The prevention of virus attacks and the limiting of damage were also covered.

Then, Dr. Gunes moved on to targeted malware. Trapdoors, salami attacks, rootkit programs, privilege escalation, interface illusions, keystroke logging, and timing attacks were mentioned. We were taught about covert channels, which secretly leak information and provide unauthorized access. Two different kinds of covert channels exist: storage channels and timing channels. Storage channels pass information by using the presence or absence of an object; a file lock is one example. Timing channels pass information through the speed at which things happen. Covert channels can be identified by looking at shared resources, checking the correctness of program code, and analyzing the flow of information; they can also be mitigated by slowing down the rate at which information is transferred. Different methods for controlling program threats were discussed, and operating system controls on the use of programs were mentioned.
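A storage channel like the ones described can be mimicked with file presence alone. This toy Python channel (invented file names, one bit per agreed-upon file) shows how information leaks even though no file is ever read.

```python
import os
import tempfile

def send_bits(bits, directory):
    """Sender: create a file to encode 1, leave it absent to encode 0."""
    for i, bit in enumerate(bits):
        path = os.path.join(directory, f"slot{i}")
        if bit:
            open(path, "w").close()   # presence of the object encodes 1

def receive_bits(n, directory):
    """Receiver: observe only presence or absence, never file contents."""
    return [int(os.path.exists(os.path.join(directory, f"slot{i}")))
            for i in range(n)]
```

Slowing the rate at which such shared objects can be created or observed, as mentioned above, directly reduces this channel's bandwidth.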

Thursday, March 25, 2010

Lecture 15: Malicious Codes (Mar 24)

For Lecture 15, we began with Mike's presentation on Blue Pill malicious software. A Blue Pill attack is essentially malicious code that runs in a virtualized environment, making it dangerous and difficult to detect. As virtualization becomes more popular, such attacks are expected to become much more common. The systems most vulnerable to this type of attack are those with modern processors that include built-in virtualization support. The defense against Blue Pill is called Red Pill, but it is not yet very reliable. This presentation was very interesting and contained a lot of great info.

After Mike's presentation, Professor Gunes continued with Lecture 15 on malicious code. He began by discussing different kinds of malicious code, including viruses, worms, rabbits/bacteria, logic/time bombs, Trojan horses, backdoors, and droppers, noting that it is sometimes difficult to distinguish between the different types. The lecture outlined why Trojans are hard to detect and showed that they are also the most popular type of malicious code. We were reminded that even if you create a legitimate trapdoor for yourself, someone else can find it. We were also introduced to the four phases of the virus lifecycle: the dormant phase, the propagation phase, the triggering phase, and the execution phase.

The lecture also included methods for preventing malicious code attacks. We learned how viruses can be detected according to certain patterns, characteristics, and other signature traits of virus code. The easiest way to prevent a malicious code attack is to be sure that you trust the source of the files you download.
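Signature-based detection, as described above, is at heart a pattern search over file bytes. Here is a toy scanner with an invented signature; real products use large signature databases and much smarter matching.

```python
# Toy storage-pattern detection: known byte signatures mapped to names.
# The signature bytes below are made up for the example.
SIGNATURES = {
    "fake-test-virus": b"FAKE-VIRUS-PATTERN",
}

def scan(data):
    """Return the names of any known signatures found in the bytes."""
    return [name for name, sig in SIGNATURES.items() if sig in data]
```

This is also why polymorphic viruses are a problem: by rewriting their own bytes on each infection, they keep any fixed signature from matching.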

Monday, March 8, 2010

Lecture 13: Program Security (Mar 8th)

Lecture 13 began with a presentation by Joshua about encrypted viruses, covering what they are and how they are used. Essentially, an encrypted virus is either one whose code is encrypted so that it is not easily detected by the system, or one that encrypts files on a victim's computer so they cannot be accessed. The presentation showed that encryption can also be used for malicious purposes.

The lecture proper began with a continuation of the discussion of non-malicious security flaws. Dr. Gunes began with string formatting vulnerabilities and how simple printf() calls, if not used properly, can cause serious security issues. He then continued his explanation of incomplete mediation, where the programmer does not specify exactly which data should be accepted from the user and therefore allows the program to accept unreasonable values and poorly formatted entries, leaving the system susceptible to buffer overflows and malicious code injection. The lecture then turned to TOCTTOU errors, otherwise known as "race conditions." Say two processes of a program use the same data: the system checks whether the first process is allowed to use the data, then lets it, and does the same for the second process. If something changes in the time between the check and the use, many errors can occur. The lecture finished with an overview of what will be on the mid-term.
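The TOCTTOU gap is easy to see in code: a separate permission check followed by the actual use leaves a window in between. This sketch contrasts the racy pattern with the atomic alternative; the function names are illustrative, not from the lecture.

```python
import os
import tempfile

def read_racy(path):
    if not os.access(path, os.R_OK):   # time of check
        raise PermissionError(path)
    # ... window here: the file could be swapped for a link to a secret ...
    return open(path).read()           # time of use

def read_safe(path):
    # One step: open() performs its own permission check atomically
    # with the use, so there is no window to exploit.
    return open(path).read()
```

The general fix is exactly this collapse of check and use into one operation, letting the privileged call itself fail rather than checking ahead of time.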

Lecture 12: Program Security (Mar 3rd)

Lecture 12 was split between a presentation from Jeff on trusted computing and a lecture on program security. Jeff's presentation covered what trusted computing means with regard to the Internet, and also the basics of what a null attack is. The lecture on program security covered how to find and fix faults, types of security flaws, and buffer overflows. The section on finding and fixing faults suggested that the best way to find faults is to allow users to test the program and report the faults they find. The types of security flaws mentioned were malicious, non-malicious, and unintentional. Malicious flaws are created in order to attack a particular system. Non-malicious flaws are sometimes features intended to be in the program that can cause problems when used by a malicious person. Finally, unintentional flaws are errors that were not intended by the program's creators. The last topic covered was buffer overflows, which occur when a program gets an input that is longer than the input it was expecting. When this happens, you don't know whether the program is going to overwrite code or data with the extra input.

Thursday, March 4, 2010

Homework 2

2nd homework on Cryptographic Systems and Program Security is posted at http://www.cse.unr.edu/~mgunes/cs450/HW2.htm.

The deadline is Friday, Mar 12 at 12:00 pm.

Wednesday, March 3, 2010

Lecture 11: Digital Certificates (Mar 1st)

The lecture of Monday, March 1st consisted of the description and usage of digital certificates. Dr. Gunes began by explaining what digital certificates are and their relationship with certificate authorities (CAs). A CA can verify someone's identity and issue them a unique certificate. However, there are multiple CAs, and an attacker could pose as his own CA. This leads to cross-verification, in which certificates are verified by a generally trusted CA, or by a CA closer to a root CA.

Certificates can help resolve legal disputes since they provide proof of the integrity and origin of data. However, certificates expire, and their keys can be stolen. A compromised certificate is revoked and placed on a Certificate Revocation List (CRL), which should be checked every time a user uses a public key to access a message. However, these lists are susceptible to DoS attacks. Short-lived (e.g., one-day) certificates sidestep some of this by expiring quickly.
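The expiry and revocation checks can be sketched over a minimal certificate record; every field value and serial number below is invented, and real validation involves far more (signature chains, usage constraints, and so on).

```python
# Serial numbers of revoked certificates, standing in for a CRL.
crl = {"serial-0042"}

def is_valid(cert, now):
    """Reject a certificate that has expired or appears on the CRL."""
    if now > cert["not_after"]:
        return False                   # expired
    if cert["serial"] in crl:
        return False                   # revoked: found on the CRL
    return True

alice_cert = {
    "serial": "serial-0007",
    "subject": "CN=Alice",
    "issuer": "CN=ExampleCA",
    "not_after": 1_900_000_000,        # expiry as a Unix timestamp
}
```

The DoS concern above is visible in the structure: if the CRL lookup is unavailable, a verifier must either fail every check or skip revocation entirely.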

Certificates are granted when a subscriber generates a public/private key pair and sends the public key to a CA. The CA verifies the subscriber's identity, then issues and publishes a certificate with the public key. To use it, the subscriber signs a message with his private key. The receiver verifies the digital signature with the sender's public key and asks the CA repository for verification of the certificate.

The X.500 directory service has the X.509 extension for public key certificates. Message recipients are generally responsible for finding the necessary certificate. X.509 defines a general certificate format that includes at least the algorithm, the CA, and who owns the key. X.500 requires each entry to have a unique name, so general information about the location/organization is used, as well as a student/employee ID number. X.509 version 3 adds extensions that allow a set of extra information to be attached to a CA's certificates.

Friday, February 26, 2010

Lecture 10: Key Exchange (Feb 24th)

On Wednesday the 24th, CS 450 consisted of a presentation from Chris and a lecture from Dr. Gunes. First, Chris presented on man-in-the-middle (MITM) attacks. The theory behind them is that an attacker can get between two communicating entities and read or modify the messages passing between them. Chris mainly went over how these attacks work and are prevented, and presented some of the tools one could use to execute one (Cain and Abel, Ettercap, Dsniff, etc.). For the lecture, Dr. Gunes talked about ways to exchange public/private encryption keys. Public keys can be exchanged by publicly announcing them to everyone, publicly announcing them to users of a directory service, using a public key authority, or using public key certificates. Private keys can also be exchanged, and the two ways we talked about in class were Merkle's simple scheme and Diffie-Hellman's more complex one. The major vulnerabilities in key exchanges are key forgery and man-in-the-middle attacks.
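The Diffie-Hellman exchange fits in a few lines of Python. The numbers here are toy-sized (real exchanges use primes of 2048 bits or more) and the private exponents are chosen arbitrarily for the example.

```python
# Public parameters: a prime p and a generator g, known to everyone.
p, g = 23, 5                     # toy-sized; illustrative only

a = 6                            # Alice's private value (never sent)
b = 15                           # Bob's private value (never sent)

A = pow(g, a, p)                 # Alice sends A = g^a mod p
B = pow(g, b, p)                 # Bob sends B = g^b mod p

alice_secret = pow(B, a, p)      # Alice computes (g^b)^a mod p
bob_secret = pow(A, b, p)        # Bob computes (g^a)^b mod p
assert alice_secret == bob_secret  # both arrive at the same shared key
```

Note that this is exactly where the MITM vulnerability from Chris's presentation bites: an attacker who intercepts A and B can substitute his own values and run one exchange with each side, which is why authenticated variants exist.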

Tuesday, February 23, 2010

Lecture 9: Intrusion Prevention Systems, Digital Signatures (Feb 22)

The class started with a presentation on intrusion prevention systems (IPS) from Justin Bode. Justin began with a funny video that emphasized the need for computer security and privacy in general. The talk then discussed the need for IPS, which flows from the limitations and weaknesses of antivirus software, firewalls, and intrusion detection systems. Justin then explained how an IPS works, covering several methods of intrusion prevention such as heuristic analysis, sandboxing, kernel-based call interception, etc. Finally, the different types of IPS (network-based, host-based, etc.) were shown and compared according to their strengths and weaknesses.

Dr. Gunes, as usual, started the lecture with a review of the previous class's material and briefly went through hashing algorithms. The lecture proceeded with the introduction of digital signatures. A digital signature is an indication of the signer's agreement with the contents of an electronic document (similar to a signature on a physical document). The two necessary properties of a digital signature were said to be unforgeability (signer protection) and authenticity (recipient protection). Digital signatures are also non-alterable (a signed document cannot be modified without invalidating the signature) and non-reusable (the signature is unique to the document). An important property of an electronic signature is that it is verifiable by any user.

Some implementation details were given. The RSA encryption system was identified as appropriate for implementing a digital signature system. The general mechanism for generating a signature is to pass the message through a redundancy function and encrypt the result with your private key. To verify the signature, one uses the signer's public key to decrypt the message and passes it through the inverse of the redundancy function. The redundancy function must be chosen carefully, as a poor one makes it easy for unauthorized parties to forge random signed messages.

The method discussed above only provides authenticity, not privacy. To add privacy protection, it is possible to further encrypt the message with a public key of the receiver, so he is the only one who would be able to decrypt it (with his private key).

To conclude the lecture, an example of digitally signing a document was given. In the example, the message digest was encrypted and sent along with the original message. The receiver could verify the authenticity of the signature, but could not modify the original message without invalidating the signature.
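The sign-the-digest example translates directly into code with textbook-sized RSA numbers (p = 61, q = 53, so n = 3233). The byte-sum "digest" below is a stand-in for a real hash/redundancy function, and the message text is invented.

```python
n, e, d = 3233, 17, 2753          # public key (n, e) and private key d

def digest(message):
    # Stand-in hash: real systems use SHA-2, not a byte sum.
    return sum(message.encode()) % n

def sign(message):
    # "Encrypt" the digest with the private key, as in the lecture.
    return pow(digest(message), d, n)

def verify(message, signature):
    # Anyone can check with the public key: decrypt the signature and
    # compare it against a freshly computed digest.
    return pow(signature, e, n) == digest(message)
```

Altering the message changes its digest, so the old signature no longer verifies, which is the non-alterability property described above.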

Friday, February 19, 2010

Lecture 8: Secure Hash Algorithm (Feb 17)

Today's class started out differently: the projector in the normal classroom was missing, and we were moved to another room without a projector. Dr. Gunes first reviewed AES and cryptographic hash functions from Lecture 7. He then went over the Secure Hash Algorithm (SHA) in detail. There are currently three different versions of SHA: SHA-0, SHA-1, and SHA-2. Submissions for an open competition for SHA-3 were due in October 2008, and publication of the new standard is scheduled for 2012. Implementing SHA requires six steps, which can be found in detail in the lecture slides. SHA-0 and SHA-1 both give a 160-bit message digest, while SHA-2 has a variable digest size of 224, 256, 384, or 512 bits. Dr. Gunes also mentioned that when submitting homework, we should try not to include images because they take much longer to print.
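The digest sizes above can be checked directly with Python's hashlib, which ships SHA-1 and the SHA-2 family (though not the withdrawn SHA-0):

```python
import hashlib

msg = b"hello"
# digest() returns raw bytes; multiply the length by 8 for bits.
print(len(hashlib.sha1(msg).digest()) * 8)     # 160
print(len(hashlib.sha256(msg).digest()) * 8)   # 256
print(len(hashlib.sha512(msg).digest()) * 8)   # 512
```

The SHA-2 variants sha224, sha256, sha384, and sha512 correspond to the four digest sizes mentioned in the lecture.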

Wednesday, February 17, 2010

Cryptosystem Lab 1

I have uploaded the first lab assignment. The deadline is Wednesday, Mar 3 at 11:30 pm.

You may post questions or comments under this blog entry.

Note: You may use an external function to test whether a number is a prime number.

Lecture 7: AES & Hash Functions (10 Feb)

In today's lecture, Dr. Gunes first went over RSA. Then we began discussing the Advanced Encryption Standard (AES) in more detail. He gave brief background on how AES came about and why DES was no longer sufficient, then explained the architecture of AES and its requirements. In AES, each round consists of four main operations: byte substitution, shift row, mix columns, and add round key, and the number of rounds needed differs for each key length. He explained these four main steps with examples and then compared DES and AES. One interesting point was that DES is still widely used around the world to protect sensitive online applications. Finally, he mentioned cryptographic hash functions: message digest functions (MDFs) and message authentication codes (MACs).
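A MAC, mentioned at the end of the lecture, can be demonstrated with Python's hmac module: a keyed hash that only holders of the shared key can produce or verify. The key and messages below are invented for the example.

```python
import hashlib
import hmac

key = b"shared-secret"                 # known only to sender and receiver

def mac(message):
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def check(message, tag):
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(mac(message), tag)
```

A tampered message fails the check, since an attacker without the key cannot produce a matching tag.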

Tuesday, February 16, 2010

Colloquium talk

There is a speaker from the Department of Homeland Security this Friday at 11 am in Ansari Business Building room 107. Dr. Lesley Blancas and Jalal Mapar will talk about "Research Funding Opportunities at the U.S. Department of Homeland Security".

Especially graduate students should plan to attend the talk.
See CSE Colloquia & Symposia page for details

Wednesday, February 10, 2010

Lecture 6: RSA (Feb 8)

This lecture we talked about RSA, a public key encryption algorithm that builds on the difficulty of factoring large numbers. This is a strong algorithm, as no polynomial-time algorithm is known that can break the encryption.

The encryption is based on two fairly large prime numbers, which are multiplied together to get "N". The first of the two keys is chosen to be greater than one and relatively prime to the product of the two primes each reduced by one, that is, (p-1)(q-1). The second key is the number that, when multiplied with the first key, leaves a remainder of 1 modulo (p-1)(q-1). A weakness of this encryption was pointed out: if someone were ever able to factor "N", which is available to the public along with one of the two keys, the other key would be fairly easy to compute.

We were then shown that encryption and decryption of a message using RSA are rather simple: take the message, raise it to the power of one of the keys, then reduce modulo "N". Decryption is the same process with the other key. If we directly raise the message to the power of one of the keys, the result is a very large number, but a method of "repeated squaring" achieves the same result without ever having to compute a large number. For example, if one of the keys is "20", the exponent can be broken down by repeated halving: 20 = 2*10, 10 = 2*5, 5 = 2*2+1, and 2 = 2*1. Starting with the smallest exponent and using exponent rules, we build the exponent back up to "20", applying the modulus of "N" at every step, so we never have to deal with any large numbers.
We ended that day with a final note of the fact that RSA has no set key length and only RSA-140,155,160, and 200 have been cracked so far.
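The key setup and the repeated-squaring trick can be sketched in a few lines of Python. The tiny primes and the sample message below are assumptions for illustration (a well-known textbook example, not from the lecture), and Python's built-in three-argument pow performs exactly the square-and-multiply computation described above:

```python
# Toy RSA sketch with deliberately tiny primes (insecure; illustration only).
p, q = 61, 53                 # two primes (real RSA uses hundreds of digits)
N = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)       # (p-1)(q-1) = 3120

e = 17                        # first key: relatively prime to phi
d = pow(e, -1, phi)           # second key: e*d = 1 (mod phi) -> 2753

m = 65                        # message, encoded as a number smaller than N
c = pow(m, e, N)              # encrypt: m^e mod N via repeated squaring
assert pow(c, d, N) == m      # decrypting with the other key recovers m
```

Note that `pow(base, exp, mod)` reduces modulo N after every squaring step, so no intermediate value ever grows large (the modular-inverse form `pow(e, -1, phi)` needs Python 3.8+).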

Monday, February 8, 2010

Student presentations

I have uploaded the link for class presentation schedule on WebCT under announcements.
Indicate your preferred date and topic on the spreadsheet.

Saturday, February 6, 2010

Homework 1

Ch 2, Q 19: The question is mainly about key lengths. You may take any computer currently in production and estimate the number of operations needed for DES and AES in your calculations.

Ch 12, Q 13: The question is asking which of the keys should be used.

Thursday, February 4, 2010

Lecture 5: DES & Rivest-Shamir-Adleman (Feb 3)

We began the lecture by discussing how DES became unreasonable for encrypting messages once it was easy enough to crack. Triple DES was introduced to cover this weakness. Triple DES is used with two separate keys: the first is used to encrypt, the second to decrypt, and then the first to encrypt again. This creates stronger encryption because the two keys must be applied in the right order to decrypt the message, greatly enlarging the effective key space. We then went over computational complexity and what P, NP, NP-hard, and NP-complete problems are. Finally, we went over what is behind the Rivest-Shamir-Adleman encryption, which uses two large prime numbers.
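The encrypt-decrypt-encrypt composition can be illustrated with a minimal sketch. The byte-wise toy cipher below is an assumption standing in for DES (real triple DES would use an actual DES implementation); the point is only the E(k1, D(k2, E(k1, ...))) ordering and how its inverse undoes it:

```python
# Toy block "cipher" standing in for DES (NOT secure; illustration only).
def enc(block: bytes, key: int) -> bytes:
    # Add the key byte, then XOR it in -- a simple non-self-inverse step.
    return bytes(((b + key) % 256) ^ key for b in block)

def dec(block: bytes, key: int) -> bytes:
    # Exact inverse of enc: undo the XOR, then undo the addition.
    return bytes(((b ^ key) - key) % 256 for b in block)

def triple_ede(block: bytes, k1: int, k2: int) -> bytes:
    # Two-key triple encryption: encrypt with k1, decrypt with k2, encrypt with k1.
    return enc(dec(enc(block, k1), k2), k1)

def triple_ede_inv(block: bytes, k1: int, k2: int) -> bytes:
    # Decryption applies the inverse steps in reverse order.
    return dec(enc(dec(block, k1), k2), k1)

msg = b"ATTACK AT DAWN!!"
ct = triple_ede(msg, k1=0x3A, k2=0xC5)
assert triple_ede_inv(ct, k1=0x3A, k2=0xC5) == msg
```

Because decryption must peel the three layers off in reverse, both keys and their order are needed to recover the plaintext.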

Monday, February 1, 2010

Lecture 4: Data Encryption Standard (DES) (Feb 1)

In today’s lecture we discussed the Data Encryption Standard. The DES algorithm is a combination of substitutions and transpositions: a product cipher, created by combining two weaker ciphers. DES operates on 64-bit blocks with 56-bit keys, using an initial permutation followed by 16 cycles of different shifts and swaps that increase the ciphertext's security. Most of the class was spent diving into the granular details of the steps in the algorithm, which uses a series of lookup tables that are all published. Decryption is just the inverse of encryption, applying the 16 cycles in reverse order. For more in-depth information about the DES algorithm see the Lecture 4 PPT; Wikipedia also has a ton of great information about the standard.
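The 16-cycle structure, and why decryption is just the same cycles with the keys reversed, can be seen in a small Feistel-network sketch. The mixing function `f` below is an assumption (DES's real round function uses expansion, S-boxes, and permutation tables); what matters is that each round swaps the halves and XORs, so running the round keys in reverse order undoes encryption:

```python
def f(half: int, round_key: int) -> int:
    # Stand-in mixing function; real DES uses expansion, S-boxes, permutations.
    return (half * 31 + round_key) & 0xFFFFFFFF

def feistel(block: int, round_keys) -> int:
    # Split the 64-bit block into two 32-bit halves, one swap/XOR per round.
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    # Final swap so decryption is the same routine with reversed keys.
    return (right << 32) | left

keys = list(range(1, 17))                 # 16 toy round keys
ct = feistel(0x0123456789ABCDEF, keys)    # "encrypt"
pt = feistel(ct, keys[::-1])              # same function, keys reversed
assert pt == 0x0123456789ABCDEF
```

This structure is why DES never needs a separate decryption algorithm: the round function `f` doesn't even have to be invertible, because the XOR cancels it out when the rounds are replayed backwards.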

Wednesday, January 27, 2010

Lecture 3: Entropy (Jan. 27)

Today's lecture explained how entropy can be used to make sure that you have a good encryption algorithm. DES was not explained today due to lack of time and will probably be taught next lecture. Claude Shannon came up with a method to mathematically describe the amount of information contained within a communication channel, bandwidth, etc. This is what we know as entropy: the amount of information present. For n equally likely messages, Shannon's measure is

H = log2(n)

(and in general H = -Σ p_i · log2(p_i) over the message probabilities p_i).

This equation tells you how many bits are needed to distinguish all the possibilities. For example, if there is only one possible signal, the entropy is 0, meaning the only signal is the only possible signal. If there are 1024 possible signals, the entropy is 10, meaning 10 bits can describe all possible messages. So the main goal in encryption is to increase the entropy of the message, thereby increasing the complexity of the message.
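The calculation is a one-liner; here is a minimal sketch (the function name and example distributions are mine, not from the lecture):

```python
import math

def entropy(probs):
    # Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero probabilities.
    return -sum(p * math.log2(p) for p in probs if p > 0)

assert entropy([1.0]) == 0                 # one possible signal -> 0 bits
assert entropy([1 / 1024] * 1024) == 10    # 1024 equally likely -> 10 bits
```

The uniform distribution maximizes entropy, which is why a good cipher should make every ciphertext look equally likely.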

Dr. Gunes also explained some characteristics of good ciphers. The main characteristics: the amount of secrecy should match what you need, the keys and the enciphering algorithm should be simple, the process should be simple, errors shouldn't propagate, and the enciphered text should be the same size as or smaller than the original.

The last thing that was talked about is the concept of confusion and diffusion. Confusion means that there isn't an easy relation between the plaintext and the ciphertext: if you changed only one letter in the plaintext, you would get an entirely different ciphertext with many or all of the letters changed. Diffusion means that the plaintext should be spread all over the ciphertext, so that someone would require access to most of the ciphertext in order to infer anything about the algorithm.
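A quick way to see this "one changed letter changes everything" behavior is to flip a single input bit and count how many output bits change. SHA-256 is used here purely as a convenient, well-diffusing function for illustration; it was not part of the lecture:

```python
import hashlib

def bits_changed(a: bytes, b: bytes) -> int:
    # Hamming distance between the two SHA-256 digests (256 bits total).
    da = int.from_bytes(hashlib.sha256(a).digest(), "big")
    db = int.from_bytes(hashlib.sha256(b).digest(), "big")
    return bin(da ^ db).count("1")

# "secret" vs "secreu": the inputs differ in a single bit ('t' ^ 1 == 'u'),
# yet roughly half of the 256 output bits differ.
n = bits_changed(b"secret", b"secreu")
print(n)
```

A cipher with good confusion and diffusion behaves the same way: any single-bit change in the input should flip about half of the output bits.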

This is a brief summary of what was covered in lecture today.

Tuesday, January 26, 2010

A Short History of Cryptography

An interesting article on the brief history of cryptography. The article indicates that even though cryptography has been studied for a long time, only a few cryptosystems can be used today to secure against current threats.

Lecture 2: Elementary Cryptography (Jan 25)



Cryptography


Goal
Its goal is to ensure communication security over an insecure medium. In the first lecture we learned that security fundamentally has three goals: Confidentiality, Availability and Integrity.


Main Components in Sending Messages
Sender
Medium <===> Intruder
Receiver


Intruder can
Interrupt (make an asset unavailable, unusable) thus breaks Availability
Intercept (gain access to the asset) thus breaks Confidentiality
Modify (tamper with an asset) thus breaks Integrity
Fabricate (create objects) thus breaks Integrity


Approaches to Secure Communication


Steganography
  • Hide the existence of the message (Remember picture in picture in the slides !)
Cryptography
  • Hide the meaning of the message (Message is there but what is it ?)


Secret Writing
Make the message difficult to read, modify or fabricate


Encryption is transforming plain text to cipher text :  c = E(P), where E is the encryption rule
Decryption is transforming cipher text to plain text :  P = D(c), where D is the decryption rule


Cryptosystem
Sender encrypts the original plain text ===> cipher text flies over the medium (Intruder does not have access to the plain text) ===> Receiver decrypts the cipher text


A cryptosystem helps us by providing privacy and integrity.


Encryption


Keyless
No key is used (algorithm doesn't take any parameters) in encryption or decryption.


Symmetric Key
The same key used in both encryption and decryption.


Asymmetric Key
Two different keys are used in encryption and decryption.


We do not use very strong keys (such as 1 million bits) due to the computational cost of encryption and decryption


Cryptanalysis 


Cryptanalysis is the deduction of the original meaning from the cipher text, for example by working out the decryption rule or key without being given it.


Ciphers
Important Note on Notation:
From now on UPPERCASE means PLAINTEXT, and lowercase denotes ciphertext


Substitution Ciphers are done by substituting each symbol by some other symbol.
E.g. Caesar Cipher, Permutation.


The Caesar cipher just substitutes every letter in the alphabet with another letter, where there are always "n" letters in between them (i.e. a shift of n+1, wrapping around at the end). For example (for n == 2), if A becomes d, then B becomes e.
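A minimal sketch of the Caesar cipher (function name mine), following the class notation of UPPERCASE plaintext and lowercase ciphertext:

```python
def caesar(plaintext: str, shift: int) -> str:
    # Shift each uppercase plaintext letter by `shift`, wrapping at Z,
    # and emit the result as lowercase ciphertext.
    return "".join(
        chr((ord(ch) - ord("A") + shift) % 26 + ord("a"))
        for ch in plaintext
    )

print(caesar("AB", 3))  # -> "de"
```

With only 25 possible shifts, trying them all by hand is enough to break it.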


Permutation is another kind of substitution where each symbol is mapped to some other symbol by an arbitrary one-to-one mapping rather than a fixed rule.


Cryptanalysis of Substitution Ciphers
Substitution ciphers are easy to break, since
  • Breaks (blank characters) and repeated letters are preserved,
  • We can use clues like short words,
  • Knowledge of the language simplifies it (e.g. E, T, O, A occur far more often than J, Q, X, Z),
  • We can use a brute-force attack (26! possibilities for a permutation).
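Letter-frequency counting, the core of the attack, takes only a couple of lines. The sample ciphertext below is made up for illustration (it is "this is a series of symbols" under a Caesar shift of 4):

```python
from collections import Counter

# A substitution cipher preserves letter frequencies, so the most common
# ciphertext symbol likely stands for a high-frequency plaintext letter.
ciphertext = "xlmw mw e wivmiw sj wcqfspw"   # made-up sample (shift of 4)
freq = Counter(c for c in ciphertext if c.isalpha())
print(freq.most_common(1))  # -> [('w', 6)]   ('w' stands for plaintext S here)
```

Matching the observed frequencies against the known frequencies of the language quickly pins down the mapping.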


Solution
We can avoid regularity if a symbol in plain text is transformed to different symbols at different occurrences. We can do that by using one-time pads where the receiver and the sender have identical pads.
Plaintext:       V   E   R   N   A   M   C   I   P   H   E   R
                21   4  17  13   0  12   2   8  15   7   4  17
Random numbers: 76  48  16  82  44   3  58  11  60   5  48  88
Sum:            97  52  33  95  44  15  60  19  75  12  52 105
Sum mod 26:     19   0   7  17  18  15   8  19  23  12   0   1
Ciphertext:      t   a   h   r   s   p   i   t   x   m   a   b
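The table above can be reproduced with a short sketch (the function name is mine): add each pad number to the plaintext letter (A=0 .. Z=25), reduce mod 26, and emit lowercase ciphertext.

```python
def vernam(plaintext: str, pad: list[int]) -> str:
    # One-time-pad encryption: letter value + pad number, mod 26.
    return "".join(
        chr((ord(p) - ord("A") + k) % 26 + ord("a"))
        for p, k in zip(plaintext, pad)
    )

pad = [76, 48, 16, 82, 44, 3, 58, 11, 60, 5, 48, 88]
print(vernam("VERNAMCIPHER", pad))  # -> "tahrspitxmab"
```

The receiver, holding the identical pad, subtracts the same numbers mod 26 to recover the plaintext.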



Difficulties in practice of using one-time pads
Both the sender and the receiver need access to identical key material; in practice people have substituted shared objects such as a telephone book.
But a phone book is not completely random: it consists of high-frequency letters, just like the plain text. For standard English, the letters A, E, O, T, N, I make up roughly half of all text, so the probability that the key letter and the plaintext letter both fall in that set is about 0.5 × 0.5 = 0.25, which gives the cryptanalyst a foothold.


Transposition


Transposition Ciphers are done by rearranging the positions of the symbols
Here is an example of a columnar transposition:

THIS IS A MESSAGE TO SHOW HOW A COLUMNAR TRANSPOSITION WORKS


   T  H  I  S  I
   S  A  M  E  S
   S  A  G  E  T
   O  S  H  O  W
   H  O  W  A  C
   O  L  U  M  N
   A  R  T  R  A
   N  S  P  O  S
   I  T  I  O  N
   W  O  R  K  S

  tssoh oaniw haaso lrsto imghw utpir seeoa mrook istwc nasns



This is also easy to break, since the frequency-distribution technique can be applied and the pattern of the transposition can be identified easily.
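The columnar transposition above can be sketched in a few lines (the function name is mine): strip the spaces, write the message into rows of a fixed width, then read it off column by column.

```python
def columnar(plaintext: str, width: int) -> str:
    # Write the message into rows of `width` letters, read off by columns.
    letters = plaintext.replace(" ", "")
    rows = [letters[i:i + width] for i in range(0, len(letters), width)]
    return "".join(
        row[col] for col in range(width) for row in rows if col < len(row)
    ).lower()

msg = "THIS IS A MESSAGE TO SHOW HOW A COLUMNAR TRANSPOSITION WORKS"
print(columnar(msg, 5))
# -> "tssohoaniwhaasolrstoimghwutpirseeoamrookistwcnasns"
```

Note that every plaintext letter survives unchanged, only its position moves, which is exactly why frequency analysis still works against it.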

Thursday, January 21, 2010

TRUST summer schools

There are three summer schools organized by the Team for Research in Ubiquitous Secure Technology. If you are interested in security-related research, these are great opportunities. Note that each has some restrictions on who may apply.

Research Experiences for Undergraduates

Women’s Institute in Summer Enrichment

Summer Experience, Colloquium and Research in Information Technology