Sovereign Keys – An EFF Proposal

Sovereign Keys Introduction

Secure communication over the internet depends almost exclusively on the Secure Sockets Layer (SSL) protocol and its successor, Transport Layer Security (TLS).  In order to understand the Electronic Frontier Foundation’s (EFF) Sovereign Keys proposal, we need to take a closer look at how public key cryptography works, as well as the public key infrastructure (PKI) that manages the public/private keys TLS relies on.  After discussing public key cryptography and PKI, we will examine current implementations of SSL/TLS and their major components: the Domain Name System (DNS), Certificate Authorities (CAs) and client/server implementations.  Finally, we will discuss the primary weaknesses in the current implementation, the Sovereign Keys proposal and how it aims to remedy those weaknesses.

To begin, we need to understand public key cryptography, also known as asymmetric cryptography.  In asymmetric cryptography, two keys are required to provide the desired cryptographic functions – namely the encryption/decryption of plaintext/ciphertext or the generation/validation of a digital signature.  This differs from symmetric cryptography, which uses a single key for all cryptographic functions.  The two keys used in asymmetric cryptography are commonly referred to as the public key and the private key.  As the name implies, the public key is ultimately made public; it is used to encrypt plaintext and to verify digital signatures, and therefore does not need to be protected the way a private key does.  The private key, on the other hand, is used to decrypt ciphertext (ciphertext being the result of running an encryption algorithm on plaintext) and to generate digital signatures. (“Introduction to Public-Key”, 2014)(“Public-key Cryptography”, 2014)
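As a concrete illustration of both functions, here is a minimal sketch using RSA via the third-party Python cryptography package.  The key size, padding choices and messages below are illustrative assumptions, not something prescribed by TLS or the Sovereign Keys proposal.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate a 2048-bit RSA key pair; the private key stays secret,
# while the derived public key can be shared freely.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"a secret message"

# Anyone holding the public key can encrypt...
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
ciphertext = public_key.encrypt(message, oaep)

# ...but only the private key holder can decrypt.
assert private_key.decrypt(ciphertext, oaep) == message

# Signatures work in the opposite direction: the private key signs,
# and anyone with the public key can verify.
pss = padding.PSS(
    mgf=padding.MGF1(hashes.SHA256()),
    salt_length=padding.PSS.MAX_LENGTH,
)
signature = private_key.sign(message, pss, hashes.SHA256())
public_key.verify(signature, message, pss, hashes.SHA256())  # raises if invalid
```

Note how the roles mirror each other: the public key encrypts and verifies, while the private key decrypts and signs.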

Continue reading

Big Data: Issues and Challenges

Introduction

To begin a discussion of Big Data, it is worthwhile to first define the term.  According to [1], Big Data is defined as “technologies and initiatives that involve data that is too diverse, fast-changing or massive for conventional technologies, skills and infrastructure to address efficiently”.  This serves as a good definition because it enumerates the key issues in dealing with Big Data, namely data that exhibits the following characteristics: it is fast-changing and/or massive, it is generated and captured rapidly, and it does not fit into conventional data storage systems (i.e., relational database systems) [1].  While I believe this definition accurately reflects how Big Data is perceived in the scientific and business communities, it is worth noting that Big Data as a discipline is still in its infancy and thus open to different interpretations.  A quick study of the etymology of the term “Big Data” provides great insight into this (see [2]).  To further our understanding, we will take a look at each of the main characteristics of Big Data as defined above and discuss some of the primary issues they introduce.

Continue reading

Shellshock Bash Bug

Everyone’s probably heard of the Shellshock Bash bug by now, which was announced as CVE-2014-6271 on September 24, 2014.  According to the announcement:

GNU Bash through 4.3 processes trailing strings after function definitions in the values of environment variables, which allows remote attackers to execute arbitrary code via a crafted environment, as demonstrated by vectors involving the ForceCommand feature in OpenSSH sshd, the mod_cgi and mod_cgid modules in the Apache HTTP Server, scripts executed by unspecified DHCP clients, and other situations in which setting the environment occurs across a privilege boundary from Bash execution, aka “ShellShock.”
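As a rough sketch of what “a crafted environment” means in practice, the widely circulated test for this bug exports an environment variable that looks like a bash function definition with a trailing command, then checks whether the trailer executes.  Driving it from Python looks like this; the variable name testvar and the probe strings are arbitrary choices, and a bash binary on the PATH is assumed.

```python
import os
import subprocess

# Craft an environment variable that looks like an exported bash function
# followed by a trailing command; a vulnerable bash executes the trailing
# `echo` while importing its environment.
env = dict(os.environ, testvar="() { :;}; echo VULNERABLE")

result = subprocess.run(
    ["bash", "-c", "echo probe"],
    env=env,
    capture_output=True,
    text=True,
)

# On a vulnerable bash, "VULNERABLE" is printed before "probe".
if "VULNERABLE" in result.stdout:
    print("bash appears vulnerable to CVE-2014-6271")
else:
    print("bash appears patched")
```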

Continue reading

Introduction to Bitmap Indexes

To alleviate confusion, I will refer to both bit-indexing and bit-mapped indexing as bitmap indexes; through my research I have seen these terms used interchangeably to describe the same concept.

What is a bitmap index and how does it work?

Bitmap indexes are a mechanism, largely employed by Oracle databases, to increase search performance on large data sets.  Bitmap indexes are most effective when applied to columns that exhibit low cardinality; in this context, cardinality is the number of distinct values a column may contain.  For example, a column called “Active” on a user account table would likely contain only two values: true or false (or active and disabled).  Regardless of the total number of tuples, the column can hold only two possible values, so it exhibits low cardinality.  A column that exhibits high cardinality benefits less from a bitmap index and instead becomes a candidate for a primary key – that is, if each tuple contains a unique value.
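To make the mechanics concrete, here is a minimal Python sketch of the idea behind a bitmap index, assuming a toy in-memory table.  Real databases store compressed bitmaps on disk, but the bitwise query logic is the same.

```python
# Toy table: each row is one tuple.
rows = [
    {"id": 1, "active": True,  "region": "east"},
    {"id": 2, "active": False, "region": "west"},
    {"id": 3, "active": True,  "region": "west"},
    {"id": 4, "active": True,  "region": "east"},
]

def build_bitmap_index(rows, column):
    """Map each distinct value of `column` to a bitmap with one bit per row."""
    index = {}
    for i, row in enumerate(rows):
        index[row[column]] = index.get(row[column], 0) | (1 << i)
    return index

active_idx = build_bitmap_index(rows, "active")
region_idx = build_bitmap_index(rows, "region")

# Predicates combine with cheap bitwise operations, e.g.
# WHERE active = TRUE AND region = 'east':
matches = active_idx[True] & region_idx["east"]
result = [rows[i] for i in range(len(rows)) if matches & (1 << i)]
print(result)  # rows with id 1 and 4
```

Because each predicate reduces to bitwise AND/OR over machine words, combining several low-cardinality filters is extremely cheap compared to scanning every tuple.

Continue reading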

Drupal SQL Injection Vulnerability

Drupal SQL Injection – SA-CORE-2014-005

This posting discusses the Drupal SQL injection vulnerability from https://www.drupal.org/SA-CORE-2014-005, which affected Drupal versions 7.0 through 7.31.  The security announcement was released on October 15, 2014 and was marked Highly Critical.  By October 29th, the Drupal Security Team had posted a follow-on Public Service Announcement (PSA), https://www.drupal.org/PSA-2014-003, warning that all Drupal sites should be considered compromised if they were not patched by October 15th, 11pm UTC – only seven hours after the initial security release!
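The flaw lived in the database abstraction layer’s expansion of array-valued query placeholders, where attacker-supplied array keys ended up concatenated into the SQL text.  Here is a simplified Python sketch of that pattern; Drupal’s actual code is PHP, and the function below illustrates the bug class rather than Drupal’s API.

```python
def expand_arguments(query, args):
    """Expand array-valued placeholders into numbered placeholders.

    The bug: the caller-supplied array KEYS are concatenated straight
    into the SQL text, so a crafted key becomes injected SQL.
    """
    for placeholder, value in args.items():
        if isinstance(value, dict):
            expanded = ", ".join(f"{placeholder}_{key}" for key in value)
            query = query.replace(placeholder, expanded)
    return query

# Intended use: keys are plain list indexes.
print(expand_arguments("SELECT * FROM users WHERE name IN (:name)",
                       {":name": {0: "alice", 1: "bob"}}))
# -> SELECT * FROM users WHERE name IN (:name_0, :name_1)

# Attack: a malicious key rides into the query string unescaped.
print(expand_arguments("SELECT * FROM users WHERE name IN (:name)",
                       {":name": {"0 ; DROP TABLE users ; --": "x"}}))
# -> SELECT * FROM users WHERE name IN (:name_0 ; DROP TABLE users ; --)
```

Continue reading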