Posted On: June 2018 - AppFerret


(Very) Basic Elliptic Curve Cryptography

2018-06-25

This is going to be a basic introduction to elliptic curve cryptography. I will assume most of my audience is here to gain an understanding of why ECC is an effective cryptographic tool and the basics of why it works. My goal is to explain it in a general sense: I will omit proofs and implementation details and instead focus on the high-level principles of what makes it work.

What It’s For

ECC is a way to encrypt data so that only specific people can decrypt it. This has several obvious real-life use cases, but the main usage is in encrypting internet data and traffic. For instance, ECC can be used to ensure that when an email is sent, no one but the recipient can read the message.


ECC is a type of Public Key Cryptography

There are many types of public key cryptography, and Elliptic Curve Cryptography is just one flavor. Others include RSA, Diffie-Hellman, and so on. I’m going to give a very simple background of public key cryptography in general as a starting point so we can discuss ECC and build on top of these ideas. By all means, please go study public key cryptography in more depth when you have the time.

As seen below, public key cryptography allows the following to happen:

(Image source: http://itlaw.wikia.com/wiki/Key_pair)

The graphic shows two keys, a public key and a private key. These keys are used to encrypt and decrypt data so that anyone in the world can look at the encrypted data while it is being transmitted, and be unable to read the message.

Let’s pretend that Facebook is going to receive a private post from Donald Trump. Facebook needs to be able to ensure that when the President sends his post over the internet, no one in the middle (like the NSA or an internet service provider) can read the message. The entire exchange using Public Key Cryptography would go like this:

  • Donald Trump Notifies Facebook that he wants to send them a private post
  • Facebook sends Donald Trump their public key
  • Donald Trump uses the public key to encrypt his post:

“I love Fox and Friends” + Public Key = “s80s1s9sadjds9s”

  • Donald Trump sends only the encrypted message to Facebook
  • Facebook uses their private key to decrypt the message:

“s80s1s9sadjds9s” + Private Key = “I love Fox and Friends”
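To make this concrete, here is a minimal sketch of the same exchange in Python, using RSA with OAEP padding from the cryptography package (the specific algorithm and library are my choices for illustration; the article doesn’t prescribe them):

# pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Facebook generates a key pair and shares only the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone holding the public key can encrypt...
ciphertext = public_key.encrypt(b"I love Fox and Friends", oaep)

# ...but only the holder of the private key can decrypt.
assert private_key.decrypt(ciphertext, oaep) == b"I love Fox and Friends"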

As you can see, this is a very useful technology. Here are some key points:

  • The public key can be sent to anyone. It is public.
  • The private key must be kept safe, because if someone in the middle were to get the private key they could decrypt the messages.
  • Computers can very quickly use the public key to encrypt a message, and the private key to decrypt a message.
  • Computers require a very long time (millions of years) to derive the original data from the encrypted message if they don’t have the private key.

How it Works: The Trapdoor Function

The crux of all public key cryptographic algorithms is that they each have their own unique trapdoor function. A trapdoor function is a function that can only be computed one way, or at least can only be computed one way easily (in less than millions of years using modern computers).

Not a trapdoor function: A + B = C

If I’m given A and B I can compute C. The problem is that if I’m given B and C I can also compute A. This is not a trapdoor function.

Trapdoor function:

“I love Fox and Friends” + Public Key = “s80s1s9sadjds9s”

If given “I love Fox and Friends” and the public key, I can produce “s80s1s9sadjds9s”, but if given “s80s1s9sadjds9s” and the public key, I can’t produce “I love Fox and Friends”.

In RSA (probably the most popular public key system), the trapdoor function relies on how hard it is to factor large numbers into their prime factors.

Public Key: 944,871,836,856,449,473

Private Key: 961,748,941 and 982,451,653

In the example above, the public key is a very large number, and the private key is the two prime factors of the public key. This is a good example of a trapdoor function because it is very easy to multiply the numbers in the private key together to get the public key, but if all you have is the public key, it will take a very long time using a computer to re-create the private key.

Note: In real cryptography the private key would need to be 200+ digits long to be considered secure.
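As a rough illustration, here is a toy Python sketch using the article’s example numbers (far smaller than real keys): multiplying the private primes is instant, while undoing that multiplication by naive trial division already means hundreds of millions of steps.

p, q = 961_748_941, 982_451_653   # private key: two primes
n = p * q                         # public key: their product
print(n)                          # 944871836856449473, computed instantly

def first_factor(n):
    # Naive factoring: try every odd divisor up to sqrt(n).
    if n % 2 == 0:
        return 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n  # n itself is prime

# Recovering p from n alone takes roughly 480 million loop iterations
# for these 18-digit numbers; with 200+ digit keys the same search
# would not finish in any realistic amount of time.
# print(first_factor(n))   # uncomment if you are willing to wait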

What Makes Elliptic Curve Cryptography Different?

ECC is used for the exact same reasons as RSA: it generates a public and private key pair and allows two parties to communicate securely. There is one major advantage, however, that ECC offers over RSA. A 256-bit ECC key offers about the same security as a 3072-bit RSA key. This means that in systems with limited resources, such as smartphones, embedded computers, and cryptocurrency networks, ECC uses less than 10% of the storage space and bandwidth that RSA requires.
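You can check that size difference yourself with a short sketch (using Python’s cryptography package, my choice for illustration; the sizes below are of PEM-serialized public keys, which include some encoding overhead):

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec, rsa

ec_key = ec.generate_private_key(ec.SECP256R1())          # 256-bit EC key
rsa_key = rsa.generate_private_key(public_exponent=65537,
                                   key_size=3072)          # comparable RSA key

def pem_len(key):
    # Length of the serialized public key, in bytes.
    return len(key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo))

print(pem_len(ec_key), pem_len(rsa_key))  # the EC key is far smaller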

ECC’s Trapdoor Function

This is probably why most of you are here. This is what makes ECC special and different from RSA. The trapdoor function is similar to a mathematical game of pool: we start with a certain point on the curve, use a function (called the dot function) to find a new point, and keep repeating the dot function to hop around the curve until we finally end up at our last point. Let’s walk through the algorithm.


  • Starting at A:
  • A dot B = -C (draw a line through A and B; it intersects the curve at -C)
  • Reflect across the x axis from -C to C
  • A dot C = -D (draw a line through A and C; it intersects the curve at -D)
  • Reflect across the x axis from -D to D
  • A dot D = -E (draw a line through A and D; it intersects the curve at -E)
  • Reflect across the x axis from -E to E

This is a great trapdoor function because if you know where the starting point (A) is and how many hops are required to get to the ending point (E), it is very easy to find the ending point. On the other hand, if all you know is where the starting point and ending point are, it is nearly impossible to find how many hops it took to get there.

Public Key: Starting Point A, Ending Point E

Private Key: Number of hops from A to E
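Below is a minimal sketch of the dot operation in Python, on a toy curve I picked purely for illustration (y^2 = x^3 + 2x + 3 over the integers mod 97; real systems use standardized curves over enormously larger fields):

# Toy curve y^2 = x^3 + 2x + 3 (mod 97). Illustrative only, not secure.
P_MOD, A_COEF = 97, 2

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)   # modular inverse (Fermat's little theorem)

def dot(P, Q):
    # Add two points: draw the line, find the third intersection, reflect.
    if P is None: return Q            # None stands for the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                   # vertical line: no third intersection
    if P == Q:                        # doubling uses the tangent line
        s = (3 * x1 * x1 + A_COEF) * inv(2 * y1) % P_MOD
    else:
        s = (y2 - y1) * inv(x2 - x1) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD  # this step is the x-axis reflection
    return (x3, y3)

def hops(P, k):
    # k hops from P by repeated dotting -- the easy, forward direction.
    # (Real implementations use double-and-add, needing only ~log2(k) dots.)
    R = None
    for _ in range(k):
        R = dot(R, P)
    return R

A = (3, 6)          # a point on the toy curve
E = hops(A, 20)     # fast: (A, E) is the public key, 20 is the private key

Recovering the hop count from A and E alone is the elliptic curve discrete logarithm problem; on a real curve there is no known approach much better than brute force.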

Questions?

See remaining text and images in the original article here.

 



Internet is Projected to Surpass TV in 2019

2018-06-19

Not too long ago, television was the clear favorite source of media for consumers, but the end of that decades-long stretch is imminent, according to data from digital media agency Zenith.

As this chart from Statista shows, the gap has long been narrowing between the number of minutes consumers spend watching TV every day and the amount of time they spend on mobile and desktop internet consumption. By 2019, Zenith’s forecast shows, daily internet consumption will surpass daily television consumption for the first time.

The rise of various social media platforms, the availability of shows on mobile, faster internet, more advanced smartphones, and more digestible content tailored to those smartphones have all played a big role in that reversal. These advancements have also increased the total amount of time each consumer spends watching TV and on the internet: nine years ago, internet and TV minutes totaled around four hours a day; by 2020 it will be almost six hours.

 

Original article here.

 



Happy 40th Anniversary to the Original Intel 8086 and the x86 Architecture

2018-06-08

Forty years ago today, Intel launched the original 8086 microprocessor — the grandfather of every x86 CPU ever built, including the ones we use now. This, it must be noted, is more or less the opposite outcome of what everyone expected at the time, including Intel.

According to Stephen P. Morse, who led the 8086 development effort, the new CPU “was intended to be short-lived and not have any successors.” Intel’s original goal with the 8086 was to improve overall performance relative to previous products while retaining source compatibility with earlier products (meaning assembly language for the 8008, 8080, or 8085 could be run on the 8086 after being reassembled). It offered faster overall performance than the 8080 or 8085 and could address up to 1MB of RAM (the 8085 topped out at 64KB). It contained eight 16-bit registers and was originally offered at a clock speed of 5MHz (later versions were clocked as high as 10MHz); the x86 abbreviation itself comes from the “86” suffix the chip shared with its successors (80186, 80286, 80386, and so on).

Morse had experience in software as well as hardware and, as this historical retrospective makes clear, made decisions intended to make it easy to maintain backwards compatibility with earlier Intel products. He even notes that had he known he was inventing an architecture that would power computing for the next 40 years, he would’ve done some things differently, including using a symmetric register structure and avoiding segmented addressing. Initially, the 8086 was intended to be a stopgap product while Intel worked feverishly to finish its real next-generation microprocessor: the iAPX 432, Intel’s first 32-bit microprocessor. When sales of the 8086 began to slip in 1979, Intel made the decision to launch a massive marketing operation around the chip, dubbed Operation Crush. The goal? Drive adoption of the 8086 over and above competing products made by Motorola and Zilog (the latter founded by former Intel employees, including Federico Faggin, lead architect on the first microprocessor, Intel’s 4004). Operation Crush was quite successful and is credited with spurring IBM to adopt the 8088 (a cut-down 8086 with an 8-bit bus) for the first IBM PC.

One might expect, given the x86 architecture’s historic domination of the computing industry, that the chip that launched the revolution would have been a towering achievement or quantum leap above the competition. The truth is more prosaic. The 8086 was a solid CPU core built by intelligent architects backed up by a strong marketing campaign. The computer revolution it helped to launch, on the other hand, transformed the world.

All that said, there’s one other point I want to touch on.

It’s Been 40 Years. Why Are We Still Using x86 CPUs?

This is a simple question with a rather complex answer. First, in a number of very real senses, we aren’t really using x86 CPUs anymore. The original 8086 was a chip with 29,000 transistors. Modern chips have transistor counts in the billions. The modern CPU manufacturing process bears little resemblance to the nMOS manufacturing process used to implement the original design in 1978. The materials used to construct the CPU are themselves very different and the advent of EUV (Extreme Ultraviolet Lithography) will transform this process even more.

Modern x86 chips translate x86 instructions into internal micro-ops for more efficient execution. They implement features like out-of-order execution and speculative execution to improve performance, and they limit the impact of slow memory buses (relative to CPU clocks) with multiple layers of cache and capabilities like branch prediction. People often ask “Why are we still using x86 CPUs?” as if this were analogous to “Why are we still using the 8086?” The honest answer is: we aren’t. An 8086 from 1978 and a Core i7-8700K are both CPUs, just as a Model T and a 2018 Toyota are both cars, but they don’t share much beyond that most basic classification.

Furthermore, Intel tried to replace or supplant the x86 architecture multiple times. The iAPX 432, Intel i960, Intel i860, and Intel Itanium were all intended to supplant x86. Far from refusing to consider alternatives, Intel literally spent billions of dollars over multiple decades to bring those alternative visions to life. The x86 architecture won these fights — but it didn’t just win them because it offered backwards compatibility. We spoke to Intel Fellow Ronak Singhal for this article, who pointed out a facet of the issue I honestly hadn’t considered before. In each case, x86 continued to win out against the architectures Intel intended to replace it because the engineers working on those x86 processors found ways to extend and improve the performance of Intel’s existing microarchitectures, often beyond what even Intel engineers had thought possible years earlier.

Is there a penalty for continuing to support the original x86 ISA? There is — but today, it’s a tiny one. The original Pentium may have devoted up to 30 percent of its transistors to backwards compatibility, and the Pentium Pro’s bet on out-of-order execution and internal micro-ops chewed up a huge amount of die space and power, but these bets paid off. Today, the capabilities that consumed huge resources on older chips are a single-digit percent or less of the power or die area budget of a modern microprocessor. Comparisons between a variety of ISAs have demonstrated that architectural design decisions have a much larger impact on performance efficiency and power consumption than ISA does, at least above the microcontroller level.

Will we still be using x86 chips 40 years from now? I have no idea. I doubt any of the Intel CPU designers who built the 8086 back in 1978 thought their core would go on to power most of the personal computing revolution of the 1980s and 1990s. But Intel’s recent moves into fields like AI, machine learning, and cloud data centers are proof that the x86 family of CPUs isn’t done evolving. No matter what happens in the future, 40 years of success are a tremendous legacy for one small chip, especially one which, as Stephen P. Morse says, “was intended to be short-lived and not have any successors.”

Now read: A Brief History of Intel CPUs, Part 1: The 4004 to the Pentium Pro

Original article here.

 



Microsoft has acquired GitHub for $7.5B in stock

2018-06-04

After a week of rumors, Microsoft today confirmed that it has acquired GitHub, the popular Git-based code sharing and collaboration service. The price of the acquisition was $7.5 billion in Microsoft stock. GitHub raised $350 million in funding, and the company was valued at about $2 billion in 2015.

Former Xamarin CEO Nat Friedman (and now Microsoft corporate vice president) will become GitHub’s CEO. GitHub founder and former CEO Chris Wanstrath will become a Microsoft technical fellow and work on strategic software initiatives. Wanstrath had retaken his CEO role after his co-founder Tom Preston-Werner resigned following a harassment investigation in 2014.

The fact that Microsoft is installing a new CEO for GitHub is a clear sign that the company’s approach to integrating GitHub will be similar to how it is handling LinkedIn. “GitHub will retain its developer-first ethos and will operate independently to provide an open platform for all developers in all industries,” a Microsoft spokesperson told us.

GitHub says that as of March 2018, there were 28 million developers in its community, and 85 million code repositories, making it the largest host of source code globally and a cornerstone of how many in the tech world build software.

But despite its popularity with enterprise users, individual developers, and open source projects, GitHub has never turned a profit, and chances are the company decided that an acquisition was preferable to trying to IPO.

GitHub’s main revenue source today is paid accounts, which allow for private repositories and a number of other features that enterprises need, with pricing ranging from $7 to $21 per user per month. Those building public and open source projects can use it for free.

While numerous large enterprises use GitHub as their code sharing service of choice, it also faces quite a bit of competition in this space thanks to products like GitLab and Atlassian’s Bitbucket, as well as a wide range of other enterprise-centric code hosting tools.

Microsoft is acquiring GitHub because it’s a perfect fit for its own ambitions to be the go-to platform for every developer, and every developer need, no matter the platform.

Microsoft has long embraced the Git protocol and uses it in its current Visual Studio Team Services product, which itself used to compete with GitHub’s enterprise service. Knowing GitHub’s position with developers, Microsoft has also leaned on the service quite a bit itself, and some in the company already claim it is the biggest contributor to GitHub today.

Yet while Microsoft’s stance toward open source has changed over the last few years, many open source developers will keep a very close eye on what the company does with GitHub after the acquisition. That’s because there is a lot of distrust of Microsoft in this cohort, which is understandable given Microsoft’s history.

In fact, TechCrunch received a tip on Friday noting not only that the deal had already closed, but that open source software maintainers were already eyeing alternatives and looking to potentially abandon GitHub in the wake of the deal. Some developers (not just those working in open source) were not wasting time, migrating without even waiting for confirmation of the deal.

While GitHub is home to more than just open source software, if such a migration came to pass, it would be a very bad look for both GitHub and Microsoft. And it would be a particularly ironic turn, given the very origins of Git: the version control system was created by Linus Torvalds in 2005 when he was working on development of the Linux kernel, in part as a response to a previous system, BitKeeper, changing its terms away from being free to use.

The new Microsoft under CEO Satya Nadella strikes us as a very different company from the Microsoft of ten years ago — especially given that the new Microsoft has embraced open source — but it’s hard to forget its earlier history of trying to suppress Linux.

“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” said Nadella in today’s announcement. “We recognize the community responsibility we take on with this agreement and will do our best work to empower every developer to build, innovate and solve the world’s most pressing challenges.”

Yet at the same time, it’s worth remembering that Microsoft is now a member of the Linux Foundation and regularly backs a number of open source projects. And Windows now has the Linux subsystem, while VS Code, the company’s free code editing tool, is open source and available on GitHub, as are .NET Core and numerous other Microsoft-led projects.

And many in the company were defending Microsoft’s commitment to GitHub and its principles, even before the deal was announced.

Still, you can’t help but wonder how Microsoft might leverage GitHub within its wider business strategy, which could see the company build stronger bridges between GitHub and Azure, its cloud hosting service, and its wide array of software and collaboration products. Microsoft is no stranger to ingesting huge companies. One of them, LinkedIn, could be another area where Microsoft explores synergies, specifically around recruitment, online tutorials, and education.

 

Original article here.

 

