Posted By: CRS, Author at AppFerret


A Quick Introduction to Text Summarization in Machine Learning

2018-09-19 - By 

Text summarization refers to the technique of shortening long pieces of text. The intention is to create a coherent and fluent summary having only the main points outlined in the document.

Automatic text summarization is a common problem in machine learning and natural language processing (NLP).

Skyhoshi, who is a U.S.-based machine learning expert with 13 years of experience and currently teaches people his skills, says that “the technique has proved to be critical in quickly and accurately summarizing voluminous texts, something which could be expensive and time consuming if done without machines.”

Machine learning models are usually trained to understand documents and distill the useful information before outputting the required summarized texts.

What’s the need for text summarization?

Propelled by modern technological innovations, data is to this century what oil was to the previous one. Today, our world is awash in the huge amounts of data being gathered and disseminated.

In fact, the International Data Corporation (IDC) projects that the total amount of digital data circulating annually around the world will grow from 4.4 zettabytes in 2013 to 180 zettabytes in 2025. That’s a lot of data!

With such a big amount of data circulating in the digital space, there is a need to develop machine learning algorithms that can automatically shorten longer texts and deliver accurate summaries that fluently convey the intended messages.

Furthermore, applying text summarization reduces reading time, accelerates the process of researching for information, and increases the amount of information that can fit in an area.

What are the main approaches to automatic summarization?

There are two main approaches to text summarization in NLP:

  • Extraction-based summarization

The extractive text summarization technique involves pulling keyphrases from the source document and combining them to make a summary. The extraction is done according to a defined metric, without making any changes to the original text.

Here is an example:

Source text: Joseph and Mary rode on a donkey to attend the annual event in Jerusalem. In the city, Mary gave birth to a child named Jesus.

Extractive summary: Joseph and Mary attend event Jerusalem. Mary birth Jesus.

As you can see above, the words have been extracted from the source text and joined to create a summary — although sometimes the summary can be grammatically strange.
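For illustration, here is a minimal sketch of the extractive idea: score each sentence by how frequent its words are in the document and keep the top-scoring sentences verbatim. This is a simplistic stand-in written in plain Python (no NLP library), not the keyphrase-based pipeline described later in this article.

```python
# Minimal sketch of extraction-based summarization by sentence scoring.
# Illustrative only; real extractive systems use richer metrics.
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    return " ".join(scored[:n_sentences])

text = ("Joseph and Mary rode on a donkey to attend the annual event in Jerusalem. "
        "In the city, Mary gave birth to a child named Jesus.")
print(extractive_summary(text))   # keeps the highest-scoring sentence verbatim
```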

  • Abstraction-based summarization

The abstraction technique entails paraphrasing and shortening parts of the source document. When abstraction is applied for text summarization in deep learning problems, it can overcome the grammar inconsistencies of the extractive method.

The abstractive text summarization algorithms create new phrases and sentences that relay the most useful information from the original text — just like humans do.

Therefore, abstraction performs better than extraction. However, the text summarization algorithms required to do abstraction are more difficult to develop; that’s why the use of extraction is still popular.

Here is an example:

Abstractive summary: Joseph and Mary came to Jerusalem where Jesus was born.
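By contrast, abstractive summaries are generated rather than copied. As a hedged sketch, a pretrained sequence-to-sequence model can produce one in a few lines, assuming the Hugging Face transformers library and its default summarization model (which post-date this article and are not part of it):

```python
# Minimal sketch of abstractive summarization with an off-the-shelf pretrained model.
# Assumes the Hugging Face "transformers" package; illustrative only.
from transformers import pipeline

summarizer = pipeline("summarization")   # downloads a default pretrained seq2seq model

text = ("Joseph and Mary rode on a donkey to attend the annual event in Jerusalem. "
        "In the city, Mary gave birth to a child named Jesus.")

result = summarizer(text, max_length=20, min_length=5, do_sample=False)
print(result[0]["summary_text"])         # a newly generated (paraphrased) summary
```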

How does a text summarization algorithm work?

Usually, text summarization in NLP is treated as a supervised machine learning problem (where a model is trained on labelled examples to predict outputs for new data).

Typically, here is how the extraction-based approach to summarizing texts works:

1. Introduce a method to extract the merited keyphrases from the source document. For example, you can use part-of-speech tagging, words sequences, or other linguistic patterns to identify the keyphrases.

2. Gather text documents with positively-labeled keyphrases. The keyphrases should be compatible with the stipulated extraction technique. To increase accuracy, you can also create negatively-labeled keyphrases.

3. Train a binary machine learning classifier to perform the text summarization. Some of the features you can use include:

  • Length of the keyphrase
  • Frequency of the keyphrase
  • The most recurring word in the keyphrase
  • Number of characters in the keyphrase

4. Finally, in the test phase, generate all the candidate keyphrases and sentences and classify them.
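To make steps 3 and 4 concrete, here is a minimal, hypothetical sketch of training such a binary keyphrase classifier on the features listed above. It assumes scikit-learn, and the candidate phrases, labels, and helper function are invented purely for illustration.

```python
# Minimal sketch of extraction-based keyphrase classification.
# Assumes scikit-learn and a tiny hand-labelled toy dataset (not from the article).
from sklearn.linear_model import LogisticRegression

def keyphrase_features(phrase, document):
    """Compute the simple features listed above for one candidate keyphrase."""
    words = phrase.lower().split()
    doc = document.lower()
    return [
        len(words),                         # length of the keyphrase (in words)
        doc.count(phrase.lower()),          # frequency of the keyphrase in the document
        max(doc.count(w) for w in words),   # frequency of its most recurring word
        len(phrase),                        # number of characters in the keyphrase
    ]

# Hypothetical training data: (candidate phrase, source text, 1 = good keyphrase)
examples = [
    ("text summarization", "Text summarization shortens long text documents.", 1),
    ("the", "Text summarization shortens long text documents.", 0),
]
X = [keyphrase_features(p, d) for p, d, _ in examples]
y = [label for _, _, label in examples]

clf = LogisticRegression().fit(X, y)

# Test phase: score new candidate keyphrases the same way.
print(clf.predict([keyphrase_features("machine learning", "Machine learning models distill text.")]))
```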

Summary

Text summarization is an interesting machine learning field that is increasingly gaining traction. As research in this area continues, we can expect to see breakthroughs that will assist in fluently and accurately shortening long text documents.

Original article here.

 



Communicate with Alexa Devices Using Sign Language

2018-07-16 - By 

Many have found Amazon’s Alexa devices to be helpful in their homes, but if you can’t physically speak, it’s a challenge to communicate with these things. So, Abhishek Singh used TensorFlow to train a program to recognize sign language and communicate with Alexa without voice.

Nice.

 



Is the Bitcoin network an oligarchy?

2018-07-03 - By 

Cryptocurrencies like Bitcoin can be analysed because every transaction is traceable. This means that they are an attractive system for physicists to study.

In a paper published in The European Physical Journal B, Leonardo Ermann from the National Commission for Atomic Energy in Buenos Aires, Argentina, and colleagues from the University of Toulouse, France, have examined the structure of the Bitcoin-owner community by looking at the transactions of this cryptocurrency between 2009 and 2013. The team’s findings reveal that Bitcoin owners are close to an oligarchy, with hidden communities whose members are highly interconnected. This research has implications for our understanding of these emerging cryptocurrency communities in our society, since ordinary bank transactions are typically deeply hidden from the public eye. The findings could also be helpful to computer scientists, economists and politicians who want to better understand and handle these communities.

As part of their study, the authors construct a blueprint of this network—the so-called Google matrix. It helps them calculate key characteristics of the network, such as PageRank—known for underpinning the Google search engine—which highlights the influence of ingoing transactions between individual Bitcoin owners. The authors also rely on CheiRank, which highlights the influence of outgoing transactions between owners.
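To illustrate the machinery being described, here is a toy sketch (assuming NumPy; not the authors’ code or data) of building a Google matrix from a small directed transaction network and computing PageRank and CheiRank from it:

```python
# Toy sketch: Google matrix, PageRank and CheiRank for a tiny directed transaction
# network. Illustrative only; the paper's real network has vastly more nodes.
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Column-stochastic Google matrix G = alpha*S + (1 - alpha)/N."""
    n = adj.shape[0]
    S = np.zeros((n, n))
    col_sums = adj.sum(axis=0)
    for j in range(n):
        S[:, j] = adj[:, j] / col_sums[j] if col_sums[j] > 0 else 1.0 / n
    return alpha * S + (1 - alpha) / n

def stationary_vector(G, iters=200):
    """Power iteration: the leading eigenvector of G (the PageRank vector)."""
    v = np.ones(G.shape[0]) / G.shape[0]
    for _ in range(iters):
        v = G @ v
    return v / v.sum()

# adj[i, j] = transactions from owner j to owner i (made-up numbers)
adj = np.array([[0, 1, 0],
                [2, 0, 1],
                [1, 3, 0]], dtype=float)

pagerank = stationary_vector(google_matrix(adj))     # influence of incoming transactions
cheirank = stationary_vector(google_matrix(adj.T))   # influence of outgoing transactions
print(pagerank, cheirank)
```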

Based on such data, they identify an unusual circle-type structure within the range of transactions between Bitcoin owners. Until now, such a structure has never been reported for real networks. This means that there are hidden communities of nodes linking the currency owners through a long series of transactions.

Based on another characteristic of the network of transactions, the authors have also found that the main portion of the network’s wealth is distributed between a small fraction of users.

Original article here.



(Very) Basic Elliptic Curve Cryptography

2018-06-25 - By 

This is going to be a basic introduction to elliptic curve cryptography. I will assume most of my audience is here to gain an understanding of why ECC is an effective cryptographic tool and the basics of why it works. My goal is to explain it in a general sense; I will omit proofs and implementation details and instead focus on the high-level principles of what makes it work.

What’s It For?

ECC is a way to encrypt data so that only specific people can decrypt it. This has several obvious real life use cases, but the main usage is in encrypting internet data and traffic. For instance, ECC can be used to ensure that when an email is sent, no one but the recipient can read the message.


ECC is a type of Public Key Cryptography

There are many types of public key cryptography, and Elliptic Curve Cryptography is just one flavor. Other algorithms include RSA, Diffie-Hellman, etc. I’m going to give a very simple background of public key cryptography in general as a starting point so we can discuss ECC and build on top of these ideas. Please, by all means, go study public key cryptography in more depth when you have the time.

As seen below, public key cryptography allows the following to happen:

http://itlaw.wikia.com/wiki/Key_pair

The graphic shows two keys, a public key and a private key. These keys are used to encrypt and decrypt data so that anyone in the world can look at the encrypted data while it is being transmitted, and be unable to read the message.

Let’s pretend that Facebook is going to receive a private post from Donald Trump. Facebook needs to be able to ensure that when the President sends his post over the internet, no one in the middle (like the NSA or an internet service provider) can read the message. The entire exchange using Public Key Cryptography would go like this:

  • Donald Trump Notifies Facebook that he wants to send them a private post
  • Facebook sends Donald Trump their public key
  • Donald Trump uses the public key to encrypt his post:

“I love Fox and Friends” + Public Key = “s80s1s9sadjds9s”

  • Donald Trump sends only the encrypted message to Facebook
  • Facebook uses their private key to decrypt the message:

“s80s1s9sadjds9s” + Private Key= “I love Fox and Friends”

As you can see this is a very useful technology. Here are some key points.

  • The public key can be sent to anyone. It is public.
  • The private key must be kept safe, because if someone in the middle were to get the private key they could decrypt the messages.
  • Computers can very quickly use the public key to encrypt a message, and the private key to decrypt a message.
  • Computers require a very long time (millions of years) to derive the original data from the encrypted message if they don’t have the private key.

How it Works: The Trapdoor Function

The crux of all public key cryptographic algorithms is that they each have their own unique trapdoor function. A trapdoor function is a function that can only be computed one way, or at least can only be computed one way easily (in less than millions of years using modern computers).

Not a trapdoor function: A + B = C

If I’m given A and B I can compute C. The problem is that if I’m given B and C I can also compute A. This is not a trapdoor function.

Trapdoor function:

“I love Fox and Friends” + Public Key = “s80s1s9sadjds9s”

If given “I love Fox and Friends” and the public key, I can produce “s80s1s9sadjds9s”, but if given “s80s1s9sadjds9s” and the Public Key I can’t produce “I love Fox and Friends”

In RSA (Probably the most popular public key system) the trapdoor function relies on how hard it is to factor large numbers into their prime factors.

Public Key: 944,871,836,856,449,473

Private Key: 961,748,941 and 982,451,653

In the example above the public key is a very large number, and the private key is the two prime factors of the public key. This is a good example of a Trapdoor Function because it is very easy to multiply the numbers in the private key together to get the public key, but if all you have is the public key it will take a very long time using a computer to re-create the private key.

Note: In real cryptography the private key would need to be 200+ digits long to be considered secure.
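To see the asymmetry with the toy numbers above: multiplying the two primes of the private key is instant, while recovering them from the public key by naive factoring is already slow at this size and hopeless at real key sizes. A rough sketch (not an actual RSA implementation):

```python
# Toy illustration of the RSA-style trapdoor with the numbers from the example above.
# Not real RSA; real keys are hundreds of digits and use far better algorithms.
p, q = 961_748_941, 982_451_653          # "private key": two primes
n = p * q                                # "public key": their product
print(n)                                 # 944871836856449473, computed instantly

def factor(n):
    """Naive trial division; already takes minutes here, hopeless for 200+ digit keys."""
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i, n // i
        i += 1
    return None

print(factor(n))                         # eventually recovers (961748941, 982451653)
```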

What Makes Elliptic Curve Cryptography Different?

ECC is used for the exact same reasons as RSA. It simply generates a public and private key and allows two parties to communicate securely. There is one major advantage, however, that ECC offers over RSA: a 256-bit key in ECC offers about the same security as a 3072-bit key using RSA. This means that in systems with limited resources such as smartphones, embedded computers, and cryptocurrency networks, ECC uses less than 10% of the hard disk space and bandwidth required by RSA.

ECC’s Trapdoor Function

This is probably why most of you are here. This is what makes ECC special and different from RSA. The trapdoor function is similar to a mathematical game of pool. We start with a certain point on the curve. We use a function (called the dot function) to find a new point. We keep repeating the dot function to hop around the curve until we finally end up at our last point. Let’s walk through the algorithm.

 

  • Starting at A:
  • A dot B = -C (Draw a line from A to B and it intersects at -C)
  • Reflect across the X axis from -C to C
  • A dot C = -D (Draw a line from A to C and it intersects -D)
  • Reflect across the X axis from -D to D
  • A dot D = -E (Draw a line from A to D and it intersects -E)
  • Reflect across the X axis from -E to E

This is a great trapdoor function because if you know where the starting point (A) is and how many hops are required to get to the ending point (E), it is very easy to find the ending point. On the other hand, if all you know is where the starting point and ending point are, it is nearly impossible to find how many hops it took to get there.

Public Key: Starting Point A, Ending Point E

Private Key: Number of hops from A to E
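To make the hopping concrete, here is a toy sketch of the dot operation and repeated hops on a tiny curve over a small prime field. The curve, prime, and starting point are chosen purely for illustration; real ECC uses standardized curves, enormous primes, and constant-time implementations, and the point-at-infinity case is omitted here.

```python
# Toy sketch of the "dot" operation (point addition) on y^2 = x^3 + ax + b over a
# small prime field. Illustrative only; not a real or secure ECC implementation.
A, B, P = 2, 2, 17                       # toy curve: y^2 = x^3 + 2x + 2 (mod 17)

def inv(x):
    return pow(x, P - 2, P)              # modular inverse (valid because P is prime)

def dot(p1, p2):
    """Line through p1 and p2, take the third intersection, reflect across the x axis."""
    (x1, y1), (x2, y2) = p1, p2
    if p1 == p2:
        s = (3 * x1 * x1 + A) * inv(2 * y1) % P      # tangent slope
    else:
        s = (y2 - y1) * inv(x2 - x1) % P             # chord slope
    x3 = (s * s - x1 - x2) % P
    y3 = (s * (x1 - x3) - y1) % P                    # reflection step
    return x3, y3

def hops(start, n):
    """The public key is the end point after n hops; n itself stays private."""
    point = start
    for _ in range(n - 1):
        point = dot(point, start)
    return point

start = (5, 1)                           # on the curve: 1^2 == 5^3 + 2*5 + 2 (mod 17)
print(hops(start, 5))                    # easy to compute forward; recovering 5 from the points is the hard part
```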

Questions?

See remaining text and images in the original article here.

 



Internet is Projected to Surpass TV in 2019

2018-06-19 - By 

Not too long ago, television was the clear favorite source of media for consumers, but the end of that decades-long stretch is imminent, according to data from digital media agency Zenith.

As this chart from Statista shows, the gap has long been narrowing between the number of minutes consumers spend watching TV every day and the amount of time they spend on mobile and desktop internet consumption. By 2020, Zenith’s forecast shows that daily internet consumption will surpass daily television consumption for the first time.

The invention of various social media platforms, the availability of shows on mobile, faster internet, more advanced smartphones, and more digestible content tailored to those smartphones have all played a big role in that reversal. These advancements have also marginally increased the total amount of minutes each consumer spends watching TV and on the internet: Nine years ago, internet and TV minutes totaled around four hours — by 2020 it’ll be almost six hours.

 

Original article here.

 



Happy 40th Anniversary to the Original Intel 8086 and the x86 Architecture

2018-06-08 - By 

Forty years ago today, Intel launched the original 8086 microprocessor — the grandfather of every x86 CPU ever built, including the ones we use now. This, it must be noted, is more or less the opposite outcome of what everyone expected at the time, including Intel.

According to Stephen P. Morse, who led the 8086 development effort, the new CPU “was intended to be short-lived and not have any successors.” Intel’s original goal with the 8086 was to improve overall performance relative to previous products while retaining source compatibility with earlier products (meaning assembly language for the 8008, 8080, or 8085 could be run on the 8086 after being recompiled). It offered faster overall performance than the 8080 or 8085 and could address up to 1MB of RAM (the 8085 topped out at 64KB). It contained eight 16-bit registers, which is where the x86 abbreviation comes from in the first place, and was originally offered at a clock speed of 5MHz (later versions were clocked as high as 10MHz).

Morse had experience in software as well as hardware and, as this historical retrospective makes clear, made decisions intended to make it easy to maintain backwards compatibility with earlier Intel products. He even notes that had he known he was inventing an architecture that would power computing for the next 40 years, he would’ve done some things differently, including using a symmetric register structure and avoiding segmented addressing. Initially, the 8086 was intended to be a stopgap product while Intel worked feverishly to finish its real next-generation microprocessor — the iAPX 432, Intel’s first 32-bit microprocessor. When sales of the 8086 began to slip in 1979, Intel made the decision to launch a massive marketing operation around the chip, dubbed Operation Crush. The goal? Drive adoption of the 8086 over and above competing products made by Motorola and Zilog (the latter founded by former Intel employees, including Federico Faggin, lead architect on the first microprocessor, Intel’s 4004). Operation Crush was quite successful and is credited with spurring IBM to adopt the 8088 (a cut-down 8086 with an 8-bit bus) for the first IBM PC.

One might expect, given the x86 architecture’s historic domination of the computing industry, that the chip that launched the revolution would have been a towering achievement or quantum leap above the competition. The truth is more prosaic. The 8086 was a solid CPU core built by intelligent architects backed up by a strong marketing campaign. The computer revolution it helped to launch, on the other hand, transformed the world.

All that said, there’s one other point I want to touch on.

It’s Been 40 Years. Why Are We Still Using x86 CPUs?

This is a simple question with a rather complex answer. First, in a number of very real senses, we aren’t really using x86 CPUs anymore. The original 8086 was a chip with 29,000 transistors. Modern chips have transistor counts in the billions. The modern CPU manufacturing process bears little resemblance to the nMOS manufacturing process used to implement the original design in 1978. The materials used to construct the CPU are themselves very different and the advent of EUV (Extreme Ultraviolet Lithography) will transform this process even more.

Modern x86 chips translate x86 instructions into internal micro-ops for more efficient execution. They implement features like out-of-order execution and speculative execution to improve performance and limit the impact of slow memory buses (relative to CPU clocks) with multiple layers of cache and capabilities like branch prediction. People often ask “Why are we still using x86 CPUs?” as if this was analogous to “Why are we still using the 8086?” The honest answer is: we aren’t. An 8086 from 1978 and a Core i7-8700K are both CPUs, just as a Model T and a 2018 Toyota are both cars — but they don’t exactly share much beyond that most basic classification.

Furthermore, Intel tried to replace or supplant the x86 architecture multiple times. The iAPX 432, Intel i960, Intel i860, and Intel Itanium were all intended to supplant x86. Far from refusing to consider alternatives, Intel literally spent billions of dollars over multiple decades to bring those alternative visions to life. The x86 architecture won these fights — but it didn’t just win them because it offered backwards compatibility. We spoke to Intel Fellow Ronak Singhal for this article, who pointed out a facet of the issue I honestly hadn’t considered before. In each case, x86 continued to win out against the architectures Intel intended to replace it because the engineers working on those x86 processors found ways to extend and improve the performance of Intel’s existing microarchitectures, often beyond what even Intel engineers had thought possible years earlier.

Is there a penalty for continuing to support the original x86 ISA? There is — but today, it’s a tiny one. The original Pentium may have devoted up to 30 percent of its transistors to backwards compatibility, and the Pentium Pro’s bet on out-of-order execution and internal micro-ops chewed up a huge amount of die space and power, but these bets paid off. Today, the capabilities that consumed huge resources on older chips are a single-digit percent or less of the power or die area budget of a modern microprocessor. Comparisons between a variety of ISAs have demonstrated that architectural design decisions have a much larger impact on performance efficiency and power consumption than ISA does, at least above the microcontroller level.

Will we still be using x86 chips 40 years from now? I have no idea. I doubt any of the Intel CPU designers that built the 8086 back in 1978 thought their core would go on to power most of the personal computing revolution of the 1980s and 1990s. But Intel’s recent moves into fields like AI, machine learning, and cloud data centers are proof that the x86 family of CPUs isn’t done evolving. No matter what happens in the future, 40 years of success are a tremendous legacy for one small chip — especially one which, as Stephen Morse says, “was intended to be short-lived and not have any successors.”

Now read: A Brief History of Intel CPUs, Part 1: The 4004 to the Pentium Pro

Original article here.

 



Microsoft has acquired GitHub for $7.5B in stock

2018-06-04 - By 

After a week of rumors, Microsoft today confirmed that it has acquired GitHub, the popular Git-based code sharing and collaboration service. The price of the acquisition was $7.5 billion in Microsoft stock. GitHub raised $350 million and we know that the company was valued at about $2 billion in 2015.

Former Xamarin CEO Nat Friedman (and now Microsoft corporate vice president) will become GitHub’s CEO. GitHub founder and former CEO Chris Wanstrath will become a Microsoft technical fellow and work on strategic software initiatives. Wanstrath had retaken his CEO role after his co-founder Tom Preston-Werner resigned following a harassment investigation in 2014.

The fact that Microsoft is installing a new CEO for GitHub is a clear sign that the company’s approach to integrating GitHub will be similar to how it is working with LinkedIn. “GitHub will retain its developer-first ethos and will operate independently to provide an open platform for all developers in all industries,” a Microsoft spokesperson told us.

GitHub says that as of March 2018, there were 28 million developers in its community, and 85 million code repositories, making it the largest host of source code globally and a cornerstone of how many in the tech world build software.

But despite its popularity with enterprise users, individual developers and open source projects, GitHub has never turned a profit and chances are that the company decided that an acquisition was preferable over trying to IPO.

GitHub’s main revenue source today is paid accounts, which allow for private repositories and a number of other features that enterprises need, with pricing ranging from $7 per user per month to $21 per user per month. Those building public and open source projects can use it for free.

While numerous large enterprises use GitHub as their code sharing service of choice, it also faces quite a bit of competition in this space thanks to products like GitLab and Atlassian’s Bitbucket, as well as a wide range of other enterprise-centric code hosting tools.

Microsoft is acquiring GitHub because it’s a perfect fit for its own ambitions to be the go-to platform for every developer, and every developer need, no matter the platform.

Microsoft has long embraced the Git protocol and is using it in its current Visual Studio Team Services product, which itself used to compete with GitHub’s enterprise service. Knowing GitHub’s position with developers, Microsoft has also leaned on the service quite a bit itself, and some in the company already claim it is the biggest contributor to GitHub today.

Yet while Microsoft’s stance toward open source has changed over the last few years, many open source developers will keep a very close eye on what the company will do with GitHub after the acquisition. That’s because there is a lot of distrust of Microsoft in this cohort, which is understandable given Microsoft’s history.

In fact, TechCrunch received a tip on Friday, which noted not only that the deal had already closed, but that open source software maintainers were already eyeing up alternatives and looking potentially to abandon GitHub in the wake of the deal. Some developers (not just those working in open source) were not wasting time, not even waiting for confirmation of the deal before migrating.

While GitHub is home to more than just open source software, if such a migration came to pass, it would be a very bad look both for GitHub and Microsoft. And it would be a particularly ironic turn, given the very origins of Git: the version control system was created by Linus Torvalds in 2005 when he was working on development of the Linux kernel, in part as a response to a previous system, BitKeeper, changing its terms away from being free to use.

The new Microsoft under CEO Satya Nadella strikes us as a very different company from the Microsoft of ten years ago — especially given that the new Microsoft has embraced open source — but it’s hard to forget its earlier history of trying to suppress Linux.

“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” said Nadella in today’s announcement. “We recognize the community responsibility we take on with this agreement and will do our best work to empower every developer to build, innovate and solve the world’s most pressing challenges.”

Yet at the same time, it’s worth remembering that Microsoft is now a member of the Linux Foundation and regularly backs a number of open source projects. And Windows now has the Linux subsystem, while VS Code, the company’s free code editing tool, is open source and available on GitHub, as are .NET Core and numerous other Microsoft-led projects.

And many in the company were defending Microsoft’s commitment to GitHub and its principles, even before the deal was announced.

Still, you can’t help but wonder how Microsoft might leverage GitHub within its wider business strategy, which could see the company build stronger bridges between GitHub and Azure, its cloud hosting service, and its wide array of software and collaboration products. Microsoft is no stranger to ingesting huge companies. One of them, LinkedIn, might be another area where Microsoft might explore synergies, specifically around areas like recruitment and online tutorials and education.

 

Original article here.

 



Google’s AutoML is a Machine Learning Game-Changer

2018-05-24 - By 

Google’s AutoML is a new up-and-coming (alpha stage) cloud software suite of Machine Learning tools. It’s based on Google’s state-of-the-art research in image recognition called Neural Architecture Search (NAS). NAS is basically an algorithm that, given your specific dataset, searches for an optimal neural network to perform a certain task on that dataset. AutoML is then a suite of machine learning tools that allows one to easily train high-performance deep networks without requiring the user to have any knowledge of deep learning or AI; all you need is labelled data! Google will then use NAS to find the best network for your specific dataset and task. They’ve already shown how their methods can achieve performance that is far better than that of hand-designed networks.

AutoML totally changes the whole machine learning game because for many applications, specialised skills and knowledge won’t be required. Many companies only need deep networks to do simpler tasks, such as image classification. At that point they don’t need to hire 5 machine learning PhDs; they just need someone who can handle moving around and organising their data.

There’s no doubt that this shift in how “AI” can be used by businesses will create change. But what kind of change are we looking at? Whom will this change benefit? And what will happen to all of the people jumping into the machine learning field? In this post, we’re going to break down what Google’s AutoML, and in general the shift towards Software 2.0, means for both businesses and developers in the machine learning field.

More development, less research for businesses

A lot of businesses in the AI space, especially start-ups, are doing relatively simple things in the context of deep learning. Most of their value comes from their final put-together product. For example, most computer vision start-ups are using some kind of image classification network, which will actually be AutoML’s first tool in the suite. In fact, Google’s NASNet, which achieves the current state of the art in image classification, is already publicly available in TensorFlow! Businesses can now skip over this complex experimental-research part of the product pipeline and just use transfer learning for their task. Because there is less experimental research, more business resources can be spent on product design, development, and the all-important data.
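As a rough sketch of that transfer-learning shortcut, one can load a pretrained NASNet from Keras and retrain only a small classification head on one’s own labelled images. This assumes a current TensorFlow/Keras install; the input size, class count, and training data are placeholders rather than anything from the article.

```python
# Hedged sketch: transfer learning from a pretrained NASNet classifier with tf.keras.
# Placeholder input size, class count, and data; assumes TensorFlow 2.x is installed.
import tensorflow as tf

base = tf.keras.applications.NASNetMobile(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                                # freeze the searched architecture

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),   # 5 = your own number of classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=3)     # plug in your own labelled data
```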

Speaking of which…

It becomes more about product

Building on the first point: since more time is being spent on product design and development, companies will have faster product iteration. The main value of the company will become less about how great and cutting-edge their research is and more about how well their product/technology is engineered. Is it well designed? Easy to use? Is their data pipeline set up in such a way that they can quickly and easily improve their models? These will be the new key questions for optimising their products and being able to iterate faster than their competition. Cutting-edge research will also become less of a main driver of increasing the technology’s performance.

Now it’s more like…

Data and resources become critical

Now that research is a less significant part of the equation, how can companies stand out? How do you get ahead of the competition? Of course sales, marketing, and as we just discussed, product design are all very important. But the huge driver of the performance of these deep learning technologies is your data and resources. The more clean and diverse yet task-targeted data you have (i.e both quality and quantity), the more you can improve your models using software tools like AutoML. That means lots of resources for the acquisition and handling of data. All of this partially signifies us moving away from the nitty-gritty of writing tons of code.

It becomes more of…

Software 2.0: Deep learning becomes another tool in the toolbox for most

All you have to do to use Google’s AutoML is upload your labelled data and boom, you’re all set! For people who aren’t super deep (ha ha, pun) into the field, and just want to leverage the power of the technology, this is big. The application of deep learning becomes more accessible. There’s less coding, more using the tool suite. In fact, for most people, deep learning becomes just another tool in their toolbox. Andrej Karpathy wrote a great article on Software 2.0 and how we’re shifting from writing lots of code to more design and using tools, then letting AI do the rest.

But, considering all of this…

There’s still room for creative science and research

Even though we have these easy-to-use tools, the journey doesn’t just end! When cars were invented, we didn’t just stop making them better even though now they’re quite easy to use. And there are still many improvements that can be made to current AI technologies. AI still isn’t very creative, nor can it reason or handle complex tasks. It has the crutch of needing a ton of labelled data, which is both expensive and time-consuming to acquire. Training still takes a long time to achieve top accuracy. The performance of deep learning models is good for some simple tasks, like classification, but does only fairly well, sometimes even poorly (depending on task complexity), on things like localisation. We don’t yet even fully understand deep networks internally.

All of these things present opportunities for science and research, and in particular for advancing the current AI technologies. On the business side of things, some companies, especially the tech giants (like Google, Microsoft, Facebook, Apple, Amazon) will need to innovate past current tools through science and research in order to compete. All of them can get lots of data and resources, design awesome products, do lots of sales and marketing etc. They could really use something more to set them apart, and that can come from cutting edge innovation.

That leaves us with a final question…

Is all of this good or bad?

Overall, I think this shift in how we create our AI technologies is a good thing. Most businesses will leverage existing machine learning tools, rather than create new ones, since they don’t have a need for it. Near-cutting-edge AI becomes accessible to many people, and that means better technologies for all. AI is also quite an “open” field, with major figures like Andrew Ng creating very popular courses to teach people about this important new technology. Making things more accessible helps people keep up with the fast-paced tech field.

Such a shift has happened many times before. Programming computers started with assembly level coding! We later moved on to things like C. Many people today consider C too complicated so they use C++. Much of the time, we don’t even need something as complex as C++, so we just use the super high level languages of Python or R! We use the tool that is most appropriate at hand. If you don’t need something super low-level, then you don’t have to use it (e.g C code optimisation, R&D of deep networks from scratch), and can simply use something more high-level and built-in (e.g Python, transfer learning, AI tools).

At the same time, continued efforts in the science and research of AI technologies is critical. We can definitely add tremendous value to the world by engineering new AI-based products. But there comes a point where new science is needed to move forward. Human creativity will always be valuable.

Conclusion

Thanks for reading! I hope you enjoyed this post and learned something new and useful about the current trend in AI technology! This is a partially opinionated piece, so I’d love to hear any responses you may have below!

Original article here.



Google’s Duplex AI Demo Just Passed the Turing Test (video)

2018-05-11 - By 

Yesterday, at I/O 2018, Google showed off a new digital assistant capability that’s meant to improve your life by making simple boring phone calls on your behalf. The new Google Duplex feature is designed to pretend to be human, with enough human-like functionality to schedule appointments or make similarly inane phone calls. According to Google CEO Sundar Pichai, the phone calls the company played were entirely real. You can make an argument, based on these audio clips, that Google actually passed the Turing Test.

If you haven’t heard the audio of the two calls, you should give the clip a listen. We’ve embedded the relevant part of Pichai’s presentation below.

I suspect the calls were edited to remove the place of business, but apart from that, they sound like real phone calls. If you listen to both segments, the male voice booking the restaurant sounds a bit more like a person than the female does, but the gap isn’t large and the female voice is still noticeably better than a typical AI. The female speaker has a rather robotic “At 12PM” at one point that pulls the overall presentation down, but past that, Google has vastly improved AI speech. I suspect the same technologies at work in Google Duplex are the ones we covered about six weeks ago.

So what’s the Turing Test and why is passing it a milestone? The British computer scientist, mathematician, and philosopher Alan Turing devised the Turing test as a means of measuring whether a computer was capable of demonstrating intelligent behavior equivalent to or indistinguishable from that of a human. This broad formulation allows for the contemplation of many such tests, though the general test case presented in discussion is a conversation between a researcher and a computer in which the computer responds to questions. A third person, the evaluator, is tasked with determining which individual in the conversation is human and which is a machine. If the evaluator cannot tell, the machine has passed the Turing test.

The Turing test is not intended to be the final word on whether an AI is intelligent and, given that Turing conceived it in 1950, obviously doesn’t take into consideration later advances or breakthroughs in the field. There have been robust debates for decades over whether passing the Turing test would represent a meaningful breakthrough. But what sets Google Duplex apart is its excellent mimicry of human speech. The original Turing test supposed that any discussion between computer and researcher would take place in text. Managing to create a voice facsimile close enough to standard human to avoid suspicion and rejection from the company in question is a significant feat.

As of right now, Duplex is intended to handle rote responses, like asking to speak to a representative, or simple, formulaic social interactions. Even so, the program’s demonstrated capability to deal with confusion (as on the second call), is still a significant step forward for these kinds of voice interactions. As artificial intelligence continues to improve, voice quality will improve and the AI will become better at answering more and more types of questions. We’re obviously still a long way from creating a conscious AI, but we’re getting better at the tasks our systems can handle — and faster than many would’ve thought possible.

 

Original article here.

 



F-Secure Hack Can Unlock Millions of Hotel Rooms With Handheld Device

2018-04-27 - By 

It’s very rare these days that a hotel will give you a real key when you check in. Instead, most chain hotels and mid-sized establishments have switched over to electronic locks with a keycard system. As researchers from F-Secure have discovered, these electronic locks may not be very secure. Researchers from the company have managed to create a “master key” for a popular brand of hotel locks that can unlock any door.

The team began this investigation more than a decade ago, when an F-Secure employee had a laptop stolen from a hotel room. Some of the staff began to wonder how easy it would be to hack the keycard locks, so they set out to do it themselves. The researchers are quick to point out this has not been a focus of F-Secure for 10 years — it took several thousand total man-hours, mostly in the last couple years.

F-Secure settled on cracking the Vision by VingCard system built by Swedish lock manufacturer Assa Abloy. These locks are used in more than 42,000 properties in 166 countries. The project was a huge success, too. F-Secure reports they can create a master key in about a minute that unlocks any door in a hotel. That’s millions of potentially vulnerable hotel rooms around the world.

The hack involves a small handheld computer and an RFID reader (it also works with older magnetic stripe cards). All the researchers need to pull off the hack is a keycard from a hotel. It doesn’t even have to be an active one. Even old and invalid cards have the necessary data to reconstruct the keys that unlock doors. The custom software then generates a key with full privileges that can bypass all the locks in a building. Many hotels use these keys not only for guest rooms, but also elevators and employee-only areas of the hotel.

F-Secure disclosed the hack to Assa Abloy last year, and the lock maker developed a software patch to fix the issue. It’s available for customers to download now, but there’s one significant problem. The firmware on each lock needs an update, and there’s no guarantee every hotel with this system will have the resources to do that. Many of them might not even know the vulnerability exists. This hack could work for a long time to come, but F-Secure isn’t making the attack tools generally available. Anyone who wants to compromise these locks will have to start from scratch.

Original article here.

 



Serverless is eating the stack and people are freaking out — as they should be

2018-04-20 - By 

AWS Lambda has stamped a big DEPRECATED on containers – Welcome to “Serverless Superheroes”! 
In this space, I chat with the toolmakers, innovators, and developers who are navigating the brave new world of “serverless” cloud applications.

In this edition, I chatted with Steven Faulkner, a senior software engineer at LinkedIn and the former director of engineering at Bustle. The following interview has been edited and condensed for clarity.

Forrest Brazeal: At Bustle, your previous company, I heard you cut your hosting costs by about forty percent when you switched to serverless. Can you speak to where all that money was going before, and how you were able to make that type of cost improvement?

Steven Faulkner: I believe 40% is where it landed. The initial results were even better than that. We had one service that was costing about $2500 a month and it went down to about $500 a month on Lambda.

Bustle is a media company — it’s got a lot of content, it’s got a lot of viral, spiky traffic — and so keeping up with that was not always the easiest thing. We took advantage of EC2 auto-scaling, and that worked … except when it didn’t. But when we moved to Lambda — not only did we save a lot of money, just because Bustle’s traffic is basically half at nighttime what it is during the day — we saw that serverless solved all these scaling headaches automatically.

On the flip side, did you find any unexpected cost increases with serverless?

There are definitely things that cost more or could be done cheaper not on serverless. When I was at Bustle they were looking at some stuff around data pipelines and settled on not using serverless for that at all, because it would be way too expensive to go through Lambda.

Ultimately, although hosting cost was an interesting thing out of the gate for us, it quickly became a relative non-factor in our move to serverless. It was saving us money, and that was cool, but the draw of serverless really became more about the velocity with which our team could develop and deploy these applications.

At Bustle, we only have ever had one part-time “ops” person. With serverless, those responsibilities get diffused across our team, and that allowed us all to focus more on the application and less on how to get it deployed.

Any of us who’ve been doing serverless for a while know that the promise of “NoOps” may sound great, but the reality is that all systems need care and feeding, even ones you have little control over. How did your team keep your serverless applications running smoothly in production?

I am also not a fan of the term “NoOps”; it’s a misnomer and misleading for people. Definitely out of the gate with serverless, we spent time answering the question: “How do we know what’s going on inside this system?”

IOPipe was just getting off the ground at that time, and so we were one of their very first customers. We were using IOPipe to get some observability, then CloudWatch sort of got better, and X-Ray came into the picture which made things a little bit better still. Since then Bustle also built a bunch of tooling that takes all of the Lambda logs and data and does some transformations — scrubs it a little bit — and sends it to places like DataDog or to Scalyr for analysis, searching, metrics and reporting.

But I’m not gonna lie, I still don’t think it’s super great. It got to the point where it was workable and we could operate and not feel like we were always missing out on what was actually going on, but there’s a lot of room for improvement.

Another common serverless pain point is local development and debugging. How did you handle that?

I wrote a framework called Shep that Bustle still uses to deploy all of our production applications, and it handles the local development piece. It allows you to develop a NodeJS application locally and then deploy it to Lambda. It could do environment variables before Lambda had environment variables, and have some sanity around versioning and using webpack to bundle. All the stuff that you don’t really want the everyday developer to have to worry about.

I built Shep in my first couple of months at Bustle, and since then, the Serverless Framework has gotten better. SAM has gotten better. The whole entire ecosystem has leveled up. If I was doing it today I probably wouldn’t need to write Shep. But at the time, that’s definitely what we had to do.

You’re putting your finger on an interesting reality with the serverless space, which is: it’s evolving so fast that it’s easy to create a lot of tooling and glue code that becomes obsolete very quickly. Did you find this to be true?

That’s extremely fair to say. I had a little Twitter thread around this a couple months ago, having a bit of a realization myself that Shep is not the way I would do deployments anymore. When AWS releases their own tooling, it always seems to start out pretty bad, so the temptation is to fill in those gaps with your own tool.

But AWS services change and get better at a very rapid rate. So I think the lesson I learned is lean on AWS as much as possible, or build on top of their foundation and make it pluggable in a way that you can just revert to the AWS tooling when it gets better.

Honestly, I don’t envy a lot of the people who sliced their piece of the serverless pie based on some tool they’ve built. I don’t think that’s necessarily a long term sustainable thing.

As I talk to developers and sysadmins, I feel like I encounter a lot of rage about serverless as a concept. People always want to tell me the three reasons why it would never work for them. Why do you think this concept inspires so much animosity and how do you try to change hearts and minds on this?

A big part of it is that we are deprecating so many things at one time. It does feel like a very big step to me compared to something like containers. Kelsey Hightower said something like this at one point: containers enable you to take the existing paradigm and move it forward, whereas serverless is an entirely new paradigm.

And so all these things that people have invented and invested time and money and resources in are just going away, and that’s traumatic, that’s painful. It won’t happen overnight, but anytime you make something that makes people feel like what they’ve maybe spent the last 10 years doing is obsolete, it’s hard. I don’t really know if I have a good way to fix that.

My goal with serverless was building things faster. I’m a product developer; that’s my background, that’s what I like to do. I want to make cool things happen in the world, and serverless allows me to do that better and faster than I can otherwise. So when somebody comes to me and says “I’m upset that this old way of doing things is going away”, it’s hard for me to sympathize.

It sounds like you’re making the point that serverless as a movement is more about business value than it is about technology.

Exactly! But the world is a big tent and there’s room for all kinds of stuff. I see this movement around OpenFaaS and the various Functions as a Service on Kubernetes and I don’t have a particular use for those things, but I can see businesses where they do, and if it helps get people transitioned over to serverless, that’s great.

So what is your definition of serverless, then?

I always joke that “cloud native” would have been a much better term for serverless, but unfortunately that was already taken. I think serverless is really about the managed services. Like, who is responsible for owning whether this thing that my application depends on stays up or not? And functions as a service is just a small piece of that.

The way I describe it is: functions as a service are cloud glue. So if I’m building a model airplane, well, the glue is a necessary part of that process, but it’s not the important part. Nobody looks at your model airplane and says: “Wow, that’s amazing glue you have there.” It’s all about how you craft something that works with all these parts together, and FaaS enables that.

And, as Joe Emison has pointed out, you’re not just limited to one cloud provider’s services, either. I’m a big user of Algolia with AWS. I love using Algolia with Firebase, or Netlify. Serverless is about taking these pieces and gluing them together. Then it’s up to the service provider to really just do their job well. And over time hopefully the providers are doing more and more.

We’re seeing that serverless mindset eat all of these different parts of the stack. Functions as a service was really a critical bit in order to accelerate the process. The next big piece is the database. We’re gonna see a lot of innovation there in the next year. FaunaDB is doing some cool stuff in that area, as is CosmosDB. I believe there is also a missing piece of the market for a Redis-style serverless offering, something that maybe even speaks Redis commands but under the hood is automatically distributed and scalable.

What is a legitimate barrier to companies that are looking to adopt serverless at this point?

Probably the biggest is: how do you deal with the migration of legacy things? At Bustle we ended up mostly re-architecting our entire platform around serverless, and so that’s one option, but certainly not available to everybody. But even then, the first time we launched a serverless service, we brought down all of our Redis instances — because Lambda spun up all these containers and we hit connection limits that you would never expect to hit in a normal app.

So if you’ve got something sitting on a mainframe somewhere that is used to only having 20 connections, and then you move some upstream service over to Lambda and suddenly it has 10,000 connections instead of 20, you’ve got a problem. If you’ve bought into service-oriented architecture as a whole over the last four or five years, then you might have a better time, because you can say “Well, all these things do is talk to each other via an API, so we can replace a single service with serverless functions.”

Any other emerging serverless trends that interest you?

We’ve solved a lot of the easy, low-hanging-fruit problems with serverless at this point, like how you do environment variables, or how you’re gonna structure a repository and enable developers to quickly write these functions. We’re starting to establish some really good best practices.

What’ll happen next is we’ll get more iteration around architecture. How do I glue these four services together, and how do the Lambda functions look that connect them? We don’t yet have the Rails of serverless — something that doesn’t necessarily expose that it’s actually a Lambda function under the hood. Maybe it allows you to write a bunch of functions in one file that all talk to each other, and then use something like webpack that splits those functions and deploys them in a way that makes sense for your application.

We could even respond to that at runtime. You could have an application that’s actually looking at what’s happening in the code and saying: “Wow this one part of your code is taking a long time to run; we should make that its own Lambda function and we should automatically deploy that and set up this SNS trigger for you.” That’s all very pie in the sky, but I think we’re not that far off from having these tools.

Because really, at the end of the day, as a developer I don’t care about Lambda, right? I mean, I have to care right now because it’s the layer in which I work, but if I can move one layer up where I’m just writing business logic and the code gets split up appropriately, that’s real magic.


Forrest Brazeal is a cloud architect and serverless community advocate at Trek10. He writes the Serverless Superheroes series and draws the ‘FaaS and Furious’ cartoon series at A Cloud Guru. If you have a serverless story to tell, please don’t hesitate to let him know.

Original article here.

 



The CIA Just Lost Control of Its Hacking Arsenal. Here’s What You Need to Know.

2018-04-09 - By 

WikiLeaks just released internal documentation of the CIA’s massive arsenal of hacking tools and techniques. These 8,761 documents — called “Vault 7” — show how their operatives can remotely monitor and control devices, such as phones, TVs, and cars.

And what’s worse, this archive of techniques seems to be out in the open, where all manner of hackers can use it to attack us.

“The CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized “zero day” exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA.” — WikiLeaks

WikiLeaks has chosen not to publish the malicious code itself “until a consensus emerges on… how such ‘weapons’ should be analyzed, disarmed and published.”

But this has laid bare just how many people are aware of these devastating hacking techniques.

“This archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.” — WikiLeaks

Disturbingly, these hacks were bought or stolen from other countries’ intelligence agencies, and instead of closing these vulnerabilities, the government put everyone at risk by intentionally keeping them open.

“[These policy decisions] urgently need to be debated in public, including whether the CIA’s hacking capabilities exceed its mandated powers and the problem of public oversight of the agency.” — the operative who leaked the data

First, I’m going to break down three takeaways from today’s Vault 7 release that every American citizen should be aware of. Then I’ll give you actionable advice for how you can protect yourself from this illegal overreach by the US government — and from the malicious hackers the government has empowered through its own recklessness.

Takeaway #1: If you drive an internet-connected car, hackers can crash it into a concrete wall and kill you and your family.

I know, this sounds crazy, but it’s real.

“As of October 2014 the CIA was also looking at infecting the vehicle control systems used by modern cars and trucks. The purpose of such control is not specified, but it would permit the CIA to engage in nearly undetectable assassinations.” — WikiLeaks

We’ve known for a while that internet-connected cars could be hacked. But we had no idea of the scope of this until today.

Like other software companies, car manufacturers constantly patch vulnerabilities as they discover them. So if you have an internet-connected car, always update to the latest version of its software.

As Wikileaks makes more of these vulnerabilities public, car companies should be able to quickly patch them and release security updates.

Takeaway #2: It doesn’t matter how secure an app is — if the operating system it runs on gets hacked, the app is no longer secure.

Since the CIA (and probably lots of other organizations, now) know how to compromise your iOS and Android devices, they can intercept data before it even reaches the app. This means they can grab your unencrypted input (microphone, keystrokes) before Signal or WhatsApp can encrypt it.

One important way to reduce the impact of these exploits is to open source as much of this software as possible.

“Proprietary software tends to have malicious features. The point is with a proprietary program, when the users don’t have the source code, we can never tell. So you must consider every proprietary program as potential malware.” — Richard Stallman, founder of the GNU Project

You may be thinking — isn’t Android open source? Its core is open source, but Google and handset manufacturers like Samsung are increasingly adding closed-source code on top of this. In doing so, they’re opening themselves up to more ways of getting hacked. When code is closed source, there’s not much the developer community can do to help them.

“There are two types of companies: those who have been hacked, and those who don’t yet know they have been hacked.” — John Chambers, former CEO of Cisco

By open-sourcing more of the code, the developer community will be able to discover and patch these vulnerabilities much faster.

Takeaway #3: Just because a device looks like it’s turned off doesn’t mean it’s really turned off.

One of the most disturbing exploits involves making Smart TVs look like they’re turned off, but actually leaving their microphones on. People all around the world are literally bugging their own homes with these TVs.

The “fake-off” mode is part of the “Weeping Angel” exploit:

“The attack against Samsung smart TVs was developed in cooperation with the United Kingdom’s MI5/BTSS. After infestation, Weeping Angel places the target TV in a ‘Fake-Off’ mode, so that the owner falsely believes the TV is off when it is on. In ‘Fake-Off’ mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server.” — Vault 7 documents

The leaked CIA documentation shows how hackers can turn off LEDs to make a device look like it’s off.

You know that light that turns on whenever your webcam is recording? That can be turned off, too. Even the director of the FBI — the same official who recently paid hackers a million dollars to unlock a shooter’s iPhone — is encouraging everyone to cover their webcams.

Just like how you should always treat a gun as though it were loaded, you should always treat a microphone as though it were recording.

What can you do about all this?

It’s not clear how badly all of these devices are compromised. Hopefully Apple, Google, and other companies will quickly patch these vulnerabilities as they are made public.

There will always be new vulnerabilities. No software application will ever be completely secure. We must continue to be vigilant.

Here’s what you should do:

  1. Don’t despair. You should still do everything you can to protect yourself and your family.
  2. Educate yourself on cybersecurity and cyberwarfare. This is the best book on the topic.
  3. Take a moment to read my guide on how to encrypt your entire life in less than an hour.

Thanks for reading. And a special thanks to Steve Phillips for helping review and fact-check this article.

Original article here.

 


standard

The Difference Between Artificial Intelligence, Machine Learning, and Deep Learning

2018-04-08 - By 

Simple explanations of Artificial Intelligence, Machine Learning, and Deep Learning and how they’re all different. Plus, how AI and IoT are inextricably connected.

We’re all familiar with the term “Artificial Intelligence.” After all, it’s been a popular focus in movies such as The Terminator, The Matrix, and Ex Machina (a personal favorite of mine). But you may have recently been hearing about other terms like “Machine Learning” and “Deep Learning,” sometimes used interchangeably with artificial intelligence. As a result, the difference between artificial intelligence, machine learning, and deep learning can be very unclear.

I’ll begin by giving a quick explanation of what Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) actually mean and how they’re different. Then, I’ll share how AI and the Internet of Things are inextricably intertwined, with several technological advances all converging at once to set the foundation for an AI and IoT explosion.

So what’s the difference between AI, ML, and DL?

First coined in 1956 by John McCarthy, AI involves machines that can perform tasks that are characteristic of human intelligence. While this is rather general, it includes things like planning, understanding language, recognizing objects and sounds, learning, and problem solving.

We can put AI in two categories, general and narrow. General AI would have all of the characteristics of human intelligence, including the capacities mentioned above. Narrow AI exhibits some facet(s) of human intelligence, and can do that facet extremely well, but is lacking in other areas. A machine that’s great at recognizing images, but nothing else, would be an example of narrow AI.

At its core, machine learning is simply a way of achieving AI.

Arthur Samuel coined the phrase not too long after AI, in 1959, defining it as “the ability to learn without being explicitly programmed.” You see, you can get AI without using machine learning, but this would require building millions of lines of code with complex rules and decision trees.

So instead of hard-coding software routines with specific instructions to accomplish a particular task, machine learning is a way of “training” an algorithm so that it can learn how to accomplish that task on its own. “Training” involves feeding huge amounts of data to the algorithm and allowing the algorithm to adjust itself and improve.

To give an example, machine learning has been used to make drastic improvements to computer vision (the ability of a machine to recognize an object in an image or video). You gather hundreds of thousands or even millions of pictures and then have humans tag them. For example, the humans might tag pictures that have a cat in them versus those that do not. Then, the algorithm tries to build a model that can tag a picture as containing a cat or not as accurately as a human can. Once the accuracy level is high enough, the machine has now “learned” what a cat looks like.
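
To make that workflow concrete, here is a deliberately tiny sketch of supervised learning in JavaScript — a single-neuron “cat or not” classifier with made-up features and labels, nothing like a real vision system:

// A toy "cat or not" classifier: two made-up numeric features per picture and
// a human-provided label for each example (1 = cat, 0 = not a cat).
const examples = [
  { features: [0.9, 0.8], label: 1 },
  { features: [0.8, 0.9], label: 1 },
  { features: [0.2, 0.1], label: 0 },
  { features: [0.1, 0.3], label: 0 },
];

let weights = [0, 0];
let bias = 0;
const learningRate = 0.1;

const predict = (features) =>
  (features[0] * weights[0] + features[1] * weights[1] + bias) > 0 ? 1 : 0;

// "Training": show the labeled data repeatedly and adjust on every mistake.
for (let epoch = 0; epoch < 100; epoch++) {
  for (const { features, label } of examples) {
    const error = label - predict(features);
    weights[0] += learningRate * error * features[0];
    weights[1] += learningRate * error * features[1];
    bias += learningRate * error;
  }
}

console.log(predict([0.85, 0.75])); // prints 1 ("cat") once training converges

A real computer-vision model works on millions of pixels and parameters, but the loop is the same idea: predict, compare against the human-provided label, adjust.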

Deep learning is one of many approaches to machine learning. Other approaches include decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others.

Deep learning was inspired by the structure and function of the brain, namely the interconnecting of many neurons. Artificial Neural Networks (ANNs) are algorithms that mimic the biological structure of the brain.

In ANNs, there are “neurons” which have discrete layers and connections to other “neurons”. Each layer picks out a specific feature to learn, such as curves or edges in image recognition. It’s this layering that gives deep learning its name: depth is created by using multiple layers as opposed to a single layer.
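
As a rough illustration of that layering — a toy sketch of my own with made-up weights, not a trained network — each layer below simply transforms the output of the layer before it:

// A toy forward pass through two layers. The weights are placeholders;
// the point is only that each layer feeds the next.
const relu = (x) => Math.max(0, x);

// One layer = a list of neurons; each neuron is a list of input weights.
const applyLayer = (inputs, layer) =>
  layer.map((neuronWeights) =>
    relu(neuronWeights.reduce((sum, w, i) => sum + w * inputs[i], 0)));

const layer1 = [[0.5, -0.2], [0.3, 0.8]]; // 2 inputs -> 2 hidden neurons
const layer2 = [[1.0, -0.5]];             // 2 hidden -> 1 output neuron

const input = [0.9, 0.4];
const hidden = applyLayer(input, layer1);  // first layer: simple patterns
const output = applyLayer(hidden, layer2); // second layer: combinations of them

console.log(hidden, output); // roughly [0.37, 0.59] and [0.075]

Training would adjust those weights from data; the sketch only shows where the “depth” comes from.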

AI and IoT are Inextricably Intertwined

I think of the relationship between AI and IoT much like the relationship between the human brain and body.

Our bodies collect sensory input such as sight, sound, and touch. Our brains take that data and make sense of it, turning light into recognizable objects and turning sounds into understandable speech. Our brains then make decisions, sending signals back out to the body to command movements like picking up an object or speaking.

All of the connected sensors that make up the Internet of Things are like our bodies: they provide the raw data of what’s going on in the world. Artificial intelligence is like our brain, making sense of that data and deciding what actions to perform. And the connected devices of IoT are again like our bodies, carrying out physical actions or communicating to others.

Unleashing Each Other’s Potential

The value and the promises of both AI and IoT are being realized because of the other.

Machine learning and deep learning have led to huge leaps for AI in recent years. As mentioned above, machine learning and deep learning require massive amounts of data to work, and this data is being collected by the billions of sensors that are continuing to come online in the Internet of Things. IoT makes better AI.

Improving AI will also drive adoption of the Internet of Things, creating a virtuous cycle in which both areas will accelerate drastically. That’s because AI makes IoT useful.

On the industrial side, AI can be applied to predict when machines will need maintenance or analyze manufacturing processes to make big efficiency gains, saving millions of dollars.

On the consumer side, rather than having to adapt to technology, technology can adapt to us. Instead of clicking, typing, and searching, we can simply ask a machine for what we need. We might ask for information like the weather or for an action like preparing the house for bedtime (turning down the thermostat, locking the doors, turning off the lights, etc.).

Converging Technological Advancements Have Made this Possible

Shrinking computer chips and improved manufacturing techniques mean cheaper, more powerful sensors.

Quickly improving battery technology means those sensors can last for years without needing to be connected to a power source.

Wireless connectivity, driven by the advent of smartphones, means that data can be sent in high volume at cheap rates, allowing all those sensors to send data to the cloud.

And the birth of the cloud has allowed for virtually unlimited storage of that data and virtually infinite computational ability to process it.

Of course, there are one or two concerns about the impact of AI on our society and our future. But as advancements and adoption of both AI and IoT continue to accelerate, one thing is certain: the impact is going to be profound.

 

Original article here.


standard

Google Publishes a JavaScript Style Guide. Here are Key Lessons.

2018-03-30 - By 

For anyone who isn’t already familiar with it, Google puts out a style guide for writing JavaScript that lays out (what Google believes to be) the best stylistic practices for writing clean, understandable code.

These are not hard and fast rules for writing valid JavaScript, only prescriptions for maintaining consistent and appealing style choices throughout your source files. This is particularly interesting for JavaScript, which is a flexible and forgiving language that allows for a wide variety of stylistic choices.

Google and Airbnb have two of the most popular style guides out there. I’d definitely recommend you check out both of them if you spend much time writing JS.

The following are thirteen of what I think are the most interesting and relevant rules from Google’s JS Style Guide.

They deal with everything from hotly contested issues (tabs versus spaces, and the controversial issue of how semicolons should be used), to a few more obscure specifications which surprised me. They will definitely change the way I write my JS going forward.

For each rule, I’ll give a summary of the specification, followed by a supporting quote from the style guide that describes the rule in detail. Where applicable, I’ll also provide an example of the style in practice, and contrast it with code that does not follow the rule.

Use spaces, not tabs

Aside from the line terminator sequence, the ASCII horizontal space character (0x20) is the only whitespace character that appears anywhere in a source file. This implies that… Tab characters are not used for indentation.

The guide later specifies you should use two spaces (not four) for indentation.

// bad
function foo() {
∙∙∙∙let name;
}

// bad
function bar() {
∙let name;
}

// good
function baz() {
∙∙let name;
}

Semicolons ARE required

Every statement must be terminated with a semicolon. Relying on automatic semicolon insertion is forbidden.

Although I can’t imagine why anyone is opposed to this idea, the consistent use of semicolons in JS is becoming the new ‘spaces versus tabs’ debate. Google’s coming out firmly here in the defence of the semicolon.

// bad
let luke = {}
let leia = {}
[luke, leia].forEach(jedi => jedi.father = 'vader')
// good
let luke = {};
let leia = {};
[luke, leia].forEach((jedi) => {
  jedi.father = 'vader';
});

Don’t use ES6 modules (yet)

Do not use ES6 modules yet (i.e. the export and import keywords), as their semantics are not yet finalized. Note that this policy will be revisited once the semantics are fully-standard.

// Don't do this kind of thing yet:
//------ lib.js ------
export function square(x) {
 return x * x;
}
export function diag(x, y) {
 return Math.sqrt(square(x) + square(y));
}

//------ main.js ------
import { square, diag } from 'lib';

Horizontal alignment is discouraged (but not forbidden)

This practice is permitted, but it is generally discouraged by Google Style. It is not even required to maintain horizontal alignment in places where it was already used.

Horizontal alignment is the practice of adding a variable number of additional spaces in your code, to make certain tokens appear directly below certain other tokens on previous lines.

// bad
{
  tiny:   42,  
  longer: 435, 
};
// good
{
  tiny: 42, 
  longer: 435,
};

Don’t use var anymore

Declare all local variables with either const or let. Use const by default, unless a variable needs to be reassigned. The var keyword must not be used.

I still see people using var in code samples on StackOverflow and elsewhere. I can’t tell if there are people out there who will make a case for it, or if it’s just a case of old habits dying hard.

// bad
var example = 42;
// good
let example = 42;

Arrow functions are preferred

Arrow functions provide a concise syntax and fix a number of difficulties with this. Prefer arrow functions over the function keyword, particularly for nested functions.

I’ll be honest, I just thought that arrow functions were great because they were more concise and nicer to look at. Turns out they also serve a pretty important purpose.

// bad
[1, 2, 3].map(function (x) {
  const y = x + 1;
  return x * y;
});

// good
[1, 2, 3].map((x) => {
  const y = x + 1;
  return x * y;
});

Use template strings instead of concatenation

Use template strings (delimited with `) over complex string concatenation, particularly if multiple string literals are involved. Template strings may span multiple lines.

// bad
function sayHi(name) {
  return 'How are you, ' + name + '?';
}

// bad
function sayHi(name) {
  return ['How are you, ', name, '?'].join('');
}

// bad
function sayHi(name) {
  return `How are you, ${ name }?`;
}

// good
function sayHi(name) {
  return `How are you, ${name}?`;
}

Don’t use line continuations for long strings

Do not use line continuations (that is, ending a line inside a string literal with a backslash) in either ordinary or template string literals. Even though ES5 allows this, it can lead to tricky errors if any trailing whitespace comes after the slash, and is less obvious to readers.

Interestingly enough, this is a rule that Google and Airbnb disagree on (here’s Airbnb’s spec).

While Google recommends concatenating longer strings (as shown below), Airbnb’s style guide recommends essentially doing nothing and allowing long strings to go on as long as they need to.

// bad (sorry, this doesn't show up well on mobile)
const longString = 'This is a very long string that \
    far exceeds the 80 column limit. It unfortunately \
    contains long stretches of spaces due to how the \
    continued lines are indented.';
// good
const longString = 'This is a very long string that ' + 
    'far exceeds the 80 column limit. It does not contain ' + 
    'long stretches of spaces since the concatenated ' +
    'strings are cleaner.';

“for… of” is the preferred type of ‘for loop’

With ES6, the language now has three different kinds of for loops. All may be used, though for-of loops should be preferred when possible.

This is a strange one if you ask me, but I thought I’d include it because it is pretty interesting that Google declares a preferred type of for loop.

I was always under the impression that for... in loops were better for objects, while for... of were better suited to arrays. A ‘right tool for the right job’ type situation.

While Google’s specification here doesn’t necessarily contradict that idea, it is still interesting to know they have a preference for this loop in particular.
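
To make the distinction concrete, here’s a quick sketch of my own (not taken from the guide) contrasting the two loop styles over the same array:

const jedi = ['Luke', 'Leia', 'Rey'];

// for...in iterates over keys — for an array, the indices, as strings
for (const index in jedi) {
  console.log(index); // '0', '1', '2'
}

// for...of iterates over the values themselves — the form the guide prefers
for (const name of jedi) {
  console.log(name); // 'Luke', 'Leia', 'Rey'
}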

Don’t use eval()

Do not use eval or the Function(...string) constructor (except for code loaders). These features are potentially dangerous and simply do not work in CSP environments.

The MDN page for eval() even has a section called “Don’t use eval!”

// bad
let obj = { a: 20, b: 30 };
let propName = getPropName();  // returns "a" or "b"
eval( 'var result = obj.' + propName );
// good
let obj = { a: 20, b: 30 };
let propName = getPropName();  // returns "a" or "b"
let result = obj[ propName ];  //  obj[ "a" ] is the same as obj.a

Constants should be named in ALL_UPPERCASE separated by underscores

Constant names use CONSTANT_CASE: all uppercase letters, with words separated by underscores.

If you’re absolutely sure that a variable shouldn’t change, you can indicate this by capitalizing the name of the constant. This makes the constant’s immutability obvious as it gets used throughout your code.

A notable exception to this rule is if the constant is function-scoped. In this case it should be written in camelCase.

// bad
const number = 5;
// good
const NUMBER = 5;
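
And to illustrate the function-scoped exception mentioned above — again a sketch of my own, not an example from the guide:

// Module-level constant: CONSTANT_CASE
const MAX_RETRIES = 3;

function connectWithRetries() {
  // Function-scoped constant: plain camelCase, per the exception
  const retryDelayMs = 250;
  for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
    console.log(`attempt ${attempt}, waiting ${retryDelayMs}ms`);
  }
}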

One variable per declaration

Every local variable declaration declares only one variable: declarations such as let a = 1, b = 2; are not used.

// bad
let a = 1, b = 2, c = 3;
// good
let a = 1;
let b = 2;
let c = 3;

Use single quotes, not double quotes

Ordinary string literals are delimited with single quotes ('), rather than double quotes (").

Tip: if a string contains a single quote character, consider using a template string to avoid having to escape the quote.

// bad
let directive = "No identification of self or mission."
// bad
let saying = 'Say it ain\u0027t so.';
// good
let directive = 'No identification of self or mission.';
// good
let saying = `Say it ain't so`;

A final note

As I said in the beginning, these are not mandates. Google is just one of many tech giants, and these are just recommendations.

That said, it is interesting to look at the style recommendations that are put out by a company like Google, which employs a lot of brilliant people who spend a lot of time writing excellent code.

You can follow these rules if you want to follow the guidelines for ‘Google compliant source code’ — but, of course, plenty of people disagree, and you’re free to brush any or all of this off.

I personally think there are plenty of cases where Airbnb’s spec is more appealing than Google’s. No matter the stance you take on these particular rules, it is still important to keep stylistic consistency in mind when writing any sort of code.

Original article here.


standard

Why SQL is beating NoSQL, and what this means for the future of data

2018-03-29 - By 

After years of being left for dead, SQL today is making a comeback. How come? And what effect will this have on the data community?

Since the dawn of computing, we have been collecting exponentially growing amounts of data, constantly asking more from our data storage, processing, and analysis technology. In the past decade, this caused software developers to cast aside SQL as a relic that couldn’t scale with these growing data volumes, leading to the rise of NoSQL: MapReduce and Bigtable, Cassandra, MongoDB, and more.

Yet today SQL is resurging. All of the major cloud providers now offer popular managed relational database services: e.g., Amazon RDS, Google Cloud SQL, and Azure Database for PostgreSQL (Azure launched just this year). In Amazon’s own words, its PostgreSQL- and MySQL-compatible Aurora database product has been the “fastest growing service in the history of AWS”. SQL interfaces on top of Hadoop and Spark continue to thrive. And just last month, Kafka launched SQL support. Your humble authors themselves are developers of a new time-series database that fully embraces SQL.

In this post we examine why the pendulum today is swinging back to SQL, and what this means for the future of the data engineering and analysis community.


Part 1: A New Hope

To understand why SQL is making a comeback, let’s start with why it was designed in the first place.

Our story starts at IBM Research in the early 1970s, where the relational database was born. At that time, query languages relied on complex mathematical logic and notation. Two newly minted PhDs, Donald Chamberlin and Raymond Boyce, were impressed by the relational data model but saw that the query language would be a major bottleneck to adoption. They set out to design a new query language that would be (in their own words): “more accessible to users without formal training in mathematics or computer programming.”

Read the full article here.

 


standard

Tools I Wish I Had Known About When I Started Coding

2018-03-15 - By 

In the tech world, there are thousands of tools that people will tell you to use. How are you supposed to know where to start?

As somebody who started coding relatively recently, this downpour of information was too much to sift through. I found myself installing extensions that did not really help me in my development cycle, and often even got in the way of it.

I am by no means an expert, but over time I have compiled a list of tools that have proven extremely useful to me. If you are just starting to learn how to program, this will hopefully offer you some guidance. If you are a seasoned developer, hopefully you will still learn something new.

I am going to break this article up into Chrome Extensions and VS Code extensions. I know there are other browsers and other text editors, but I am willing to bet most of the tools are also available for your platform of choice, so let’s not start a religious argument over our personal preferences.

Feel free to jump around.

Chrome Extensions

Now that I am a self-proclaimed web developer, I practically live in my Chrome console. Below are some tools that allow me to spend less time there:

  • WhatFont — The name says it all. This is an easy way of finding out the fonts that your favorite website is using, so that you can borrow them for your own projects.
  • Pesticide — Useful for seeing the outlines of your <div>s and modifying CSS. This was a lifesaver when I was trying to learn my way around the box-model.
  • Colorzilla — Useful for copying exact colors off of a website. This copies a color straight to your clipboard so you don’t spend forever trying to get the right RGBA combination.
  • CSS Peeper — Useful for looking at colors and assets used on a website. A good exercise, especially when starting out, is cloning out websites that you think look cool. This gives you a peek under the hood at their color scheme and allows you to see what other assets exist on their page.
  • Wappalyzer — Useful for seeing the technologies being used on a website. Ever wonder what kind of framework a website is using or what service it is hosted on? Look no further.
  • React Dev Tools — Useful for debugging your React applications. It bears mentioning that this is only useful if you are programming a React application.
  • Redux Dev Tools — Useful for debugging applications using Redux. It bears mentioning that this is only useful if you are implementing Redux in your application.
  • JSON Formatter — Useful for making JSON look cleaner in the browser. Have you ever stared an ugly JSON blob in the face, trying to figure out how deeply nested the information you want is? Well this makes it so that it only takes 2 hours instead of 3.
  • Vimeo Repeat and Speed — Useful for speeding up Vimeo videos. If you watch video tutorials like most web developers, you know how handy it is to consume them at 1.25 times the regular playback speed. There are also versions for YouTube.

VS Code Extensions

Visual Studio Code is my editor of choice.

People love their text editors, and I am no exception. However, I’m willing to bet most of these extensions work for whatever editor you are using as well. Check out my favorite extensions:

  • Auto Rename Tag — Auto rename paired HTML tags. You created a <p> tag. Now you want to change it, as well as its closing </p> tag, to something else. Simply change one and the other will follow. Theoretically improves your productivity by a factor of 2.
  • HTML CSS Support — CSS support for HTML documents. This is useful for getting some neat syntax highlighting and code suggestions so that CSS only makes you want to quit coding a couple of times a day.
  • HTML Snippets — Useful code snippets. Another nice time saver. Pair this with Emmet and you barely ever have to type real HTML again.
  • Babel ES6/ES7 — Adds JavaScript Babel syntax coloring. If you are using Babel, this will make it much easier to differentiate what is going on in your code. This is neat if you like to play with modern features of JavaScript.
  • Bracket Pair Colorizer — Adds colors to brackets for easier block visualization. This is handy for those all-too-common bugs where you didn’t close your brackets or parentheses accurately.
  • ESLint — Integrates ESLint into Visual Studio Code. This is handy for getting hints about bugs as you are writing your code and, depending on your configuration, it can help enforce good coding style.
  • Guides — Adds extra guide lines to code. This is another visual cue to make sure that you are closing your brackets correctly. If you can’t tell, I’m a very visual person.
  • JavaScript Console Utils — Makes for easier console logging. If you are like most developers, you will find yourself logging to the console in your debugging flow (I know that we are supposed to use the debugger). This utility makes it easy to create useful console.log() statements.
  • Code Spell Checker — Spelling checker that accounts for camelCase. Another common source of bugs is fat-thumbing a variable or function name. This spell checker will look for uncommon words and is good about accounting for the way we write things in JavaScript.
  • Git Lens — Makes it easier to see when, and by whom, changes were made. This is nice for blaming the appropriate person when code gets broken, since it is absolutely never your fault.
  • Path Intellisense — File path autocompletion. This is super handy for importing things from other files. It makes navigating your file tree a breeze.
  • Prettier — Automatic code formatter. Forget about the days where you had to manually indent your code and make things human-legible. Prettier will do this for you much faster, and better, than you ever could on your own. I can’t recommend this one enough.
  • VSCode-Icons — Adds icons to the file tree. If looking at your file structure hurts your eyes, this might help. There is a helpful icon for just about any kind of file you are making which will make it easier to distinguish what you are looking at.

In Conclusion

You likely have your own set of tools that are indispensable to your development cycle. Hopefully some of the tools I mentioned above can make your workflow more efficient.

Do not fall into the trap, however, of installing every tool you run across before learning to use the ones you already have, as this can be a huge time-sink.

I encourage you to leave your favorite tools in the comments below here, so that we can all learn together.

If you liked this article please give it some claps and check out other articles I’ve written here, here, here, and here. Also, give me a follow on Twitter.

Original article here.

 

 

 

 


standard

Lies Spread Faster than the Truth

2018-03-09 - By 

There is worldwide concern over false news and the possibility that it can influence political, economic, and social well-being. To understand how false news spreads, Vosoughi et al. used a data set of rumor cascades on Twitter from 2006 to 2017. About 126,000 rumors were spread by ∼3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed.

See full article here.

 


standard

Two GIANT DDoS Attacks in Less Than a Week

2018-03-07 - By 

In a post on its engineering blog, GitHub said, “The attack originated from over a thousand different autonomous systems (ASNs) across tens of thousands of unique endpoints. It was an amplification attack using the memcached-based approach that peaked at 1.35Tbps via 126.9 million packets per second.” Then, yesterday, Arbor Networks announced that record had been broken by a 1.7 Tbps attack!

Here is a TWiT Netcast discussing the attacks:


standard

Startup Success Stats

2018-02-22 - By 

We often hear that 9/10 businesses fail within the first 18 months. We can agree that’s really not the most motivating statistic!

But, thank you for bringing up the subject because launching your startup isn’t investing in a lost cause (I speak from my personal experience at Linkilaw).

  • A startup IS profitable

If you had any doubt, YES! A start-up can be profitable! Studies showed that “the combined annual turnover for SMEs in 2016 was £1.9 trillion” and that 40% of small businesses are profitable, so running a business for more than 18 months is definitely possible! From my point of view, I’ve seen hundreds of entrepreneurs successfully launch their businesses… and they’re still operating!

  • “You should not work in startups”

One piece of advice I regularly discuss with my team: DON’T listen to what your friends and family say about working in startups! In 2016, “total employment in SMEs was 16.1 million”. Also, “European startups create on average 12.9 jobs in the first 2.5 years”.

  • A successful startup: a magical formula?

I’m sure other entrepreneurs out here will agree that creating your own business is a long journey… But it won’t change the fact that “having two founders will raise 30% more investment, grow your customers 3 times as fast”.

As everywhere, some areas work better than others… so if you’re still wondering where to go, the highest success rates are in finance, insurance and real estate: “58% of these businesses are still operating after 4 years”. It doesn’t mean that you cannot innovate anywhere else though!

  • And failure?

We won’t build a magical world without being realistic. If you do meet failure (because success isn’t an exact science in this case!), remember that “20% of founders who have failed on their first startup, succeed on their second”.

So, don’t take failure as a dead end, keep going and learn from your experiences! You will never know where the small idea coming to your head will lead you. I love this quote by Jeff Bezos (founder of Amazon) which really stuck with me, especially in moments of darkness that come with creating the business:

“I knew that if I failed I wouldn’t regret that, but I knew the one thing I might regret is not trying”.

Above all, I think that making your startup survive more than 18 months is about managing its journey and being sure that you operate gradually.

If you’re unsure of what you should do to launch a solid, successful business and what is legally required at each step, Linkilaw offers a Startup Legal Session (it’s free!).

 

Original article found on Quora.


standard

Intel’s New Quantum Computing Breakthrough Using Silicon

2018-02-20 - By 

Silicon has been an integral part of computing for decades. While it’s not always the best solution by every metric, it’s captured a key set of capabilities that make it well suited for general (or classical) computing. When it comes to quantum computing, however, silicon-based solutions haven’t really been adopted.

Historically, silicon qubits have been shunned for two reasons: It’s difficult to control qubits manufactured on silicon, and it’s never been clear if silicon qubits could scale as well as other solutions. D-Wave’s quantum annealer is up to 2,048 qubits, and recently added a reverse quantum annealing capability, while IBM demonstrated a 50 qubit quantum computer last month. Now Intel is throwing its own hat into the ring with a new type of qubit known as a “spin qubit,” produced on conventional silicon.

Note: This is a fundamentally different technology than the quantum computing research Intel unveiled earlier this year. The company is proceeding along parallel tracks, developing a more standard quantum computer alongside its own silicon-based efforts.

Here’s how Intel describes this technology:

Spin qubits highly resemble the semiconductor electronics and transistors as we know them today. They deliver their quantum power by leveraging the spin of a single electron on a silicon device and controlling the movement with tiny, microwave pulses.

The company has published an informative video about the technology, available below:

As for why Intel is pursuing spin qubits as opposed to the approach IBM has taken, there are several reasons. First and foremost, Intel is heavily invested in the silicon industry — far more so than any other firm working on quantum computing. IBM sold its fabs to GlobalFoundries. No one, to the best of our knowledge, is building quantum computers at pure-play foundries like TSMC. Intel’s expertise is in silicon and the company is still one of the foremost foundries in the world.

But beyond that, there are benefits to silicon qubits. Silicon qubits are smaller than conventional qubits, and they are expected to hold coherence for a longer period of time. This could be critical to efforts to scale quantum computing systems upwards. While its initial test chips have been held at a temperature of 20 millikelvin, Intel believes it can scale its design up to an operating temperature of 1 kelvin. That gap might not seem like much, but Intel claims it’s critical to long-term qubit scaling. Moving up to 1K reduces the amount of cooling equipment that must be packed between each qubit, and allows more qubits to pack into a smaller amount of space.

Intel is already moving towards having a functional spin qubit system. The company has prototyped a “spin qubit fabrication flow on its 300 mm process technology,” using isotopically pure wafers sourced for producing spin-qubit test chips:

Fabricated in the same facility as Intel’s advanced transistor technologies, Intel is now testing the initial wafers. Within a couple of months, Intel expects to be producing many wafers per week, each with thousands of small qubit arrays.

If silicon spin qubits can be built in large quantities, and the benefits Intel expects materialize, it could be a game-changing event for quantum computing. Building these chips in bulk and packing qubits more tightly together could make it possible to scale up qubit production relatively quickly.

Original article here.

 


standard

Six Top Cloud Security Threats in 2018

2018-02-14 - By 

2018 is set to be a very exciting year for cloud computing. In the fourth financial quarter of 2017, Amazon, SAP, Microsoft, IBM, Salesforce, Oracle, and Google combined had over $22 billion in revenue from cloud services. Cloud services will only get bigger in 2018. It’s easy to understand why businesses love the cloud: it’s easier and more affordable to use third-party cloud services than for every enterprise to maintain its own datacenters on its own premises.

It’s certainly possible to keep your company’s data on cloud servers secure. But cyber threats are evolving, and cloud servers are a major target. Keep 2018’s top cloud security threats in mind, and you’ll have the right mindset for properly securing your business’ valuable data.

1. Data Breaches

2017 was a huge year for data breaches. Even laypeople outside the cybersecurity world heard about September’s Equifax breach because it affected at least 143 million ordinary people. Breaches frequently happen to cloud data, as well.

In May 2017, a major data breach that hit OneLogin was discovered. OneLogin provides identity management and single sign-on capabilities for the cloud services of over 2,000 companies worldwide.

“Today we detected unauthorized access to OneLogin data in our US data region. We have since blocked this unauthorized access, reported the matter to law enforcement, and are working with an independent security firm to determine how the unauthorized access happened and verify the extent of the impact of this incident. We want our customers to know that the trust they have placed in us is paramount,” said OneLogin CISO Alvaro Hoyos.

Over 1.4 billion records were lost to data breaches in March 2017 alone, many of which involved cloud servers.

2. Data loss

Sometimes data lost from cloud servers is not due to cyber attack. Non-malicious causes of data loss include natural disasters like floods and earthquakes and simple human error, such as when a cloud administrator accidentally deletes files. Threats to your cloud data don’t always look like clever kids wearing hoodies. It’s easy to underestimate the risk of something bad happening to your data due to an innocent mistake.

One of the keys to mitigating the non-malicious data loss threat is to maintain lots of backups at physical sites at different geographic locations.

3. Insider threats

Insider threats to cloud security are also underestimated. Most employees are trustworthy, but a rogue cloud service employee has a lot of access that an outside cyber attacker would have to work much harder to acquire.

From a whitepaper by security researchers William R Claycomb and Alex Nicoll:

“Insider threats are a persistent and increasing problem. Cloud computing services provide a resource for organizations to improve business efficiency, but also expose new possibilities for insider attacks. Fortunately, it appears that few, if any, rogue administrator attacks have been successful within cloud service providers, but insiders continue to abuse organizational trust in other ways, such as using cloud services to carry out attacks. Organizations should be aware of vulnerabilities exposed by the use of cloud services and mindful of the availability of cloud services to employees within the organization. The good news is that existing data protection techniques can be effective, if diligently and carefully applied.”

4. Denial of Service attacks

Denial of service (DoS) attacks are pretty simple for cyber attackers to execute, especially if they have control of a botnet. Also, DDoS-as-a-service is growing in popularity on the Dark Web. Now attackers don’t need know-how or their own bots; all they have to do is transfer some of their cryptocurrency to buy a Dark Web service.

Denis Makrushin wrote for Kaspersky Lab:

“Ordering a DDoS attack is usually done using a full-fledged web service, eliminating the need for direct contact between the organizer and the customer. The majority of offers that we came across left links to these resources rather than contact details. Customers can use them to make payments, get reports on work done or utilize additional services. In fact, the functionality of these web services looks similar to that offered by legal services.”

An effective DDoS attack on a cloud service gives a cyber attacker the time they need to execute other types of cyber attacks without getting caught.

5. Spectre and Meltdown

This is a new addition to the list of known cloud security threats for 2018. The Meltdown and Spectre speculative execution vulnerabilities also affect CPUs that are used by cloud services. Spectre is especially difficult to patch.

From CSO Online:

“Both Spectre and Meltdown permit side-channel attacks because they break down the isolation between applications. An attacker that is able to access a system through unprivileged log in can read information from the kernel, or attackers can read the host kernel if they are a root user on a guest virtual machine (VM).

This is a huge issue for cloud service providers. While patches are becoming available, they only make it harder to execute an attack. The patches might also degrade performance, so some businesses might choose to leave their systems unpatched. The CERT Advisory is recommending the replacement of all affected processors—tough to do when replacements don’t yet exist.”

6. Insecure APIs

Application Programming Interfaces are important software components for cloud services. In many cloud systems, APIs are the only facets outside of the trusted organizational boundary with a public IP address. Exploiting a cloud API gives cyber attackers considerable access to your cloud applications. This is a huge problem!

Cloud APIs represent a public front door to your applications. Secure them very carefully.
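
As one small illustration of what securing an API can involve, here is a minimal sketch of my own — it assumes a Node.js service built with the Express package, and the header name and key store are purely hypothetical — that rejects unauthenticated requests before they ever reach application logic:

const express = require('express'); // assumes the Express package is installed
const app = express();

const VALID_API_KEYS = new Set(['example-key-1']); // hypothetical key store

// Reject any request without a valid key before it reaches application logic.
app.use((req, res, next) => {
  const key = req.get('X-Api-Key');
  if (!key || !VALID_API_KEYS.has(key)) {
    return res.status(401).json({ error: 'unauthorized' });
  }
  next();
});

app.get('/status', (req, res) => res.json({ ok: true }));

app.listen(3000);

Authentication is only the first layer; real deployments also need TLS, rate limiting, scoped credentials, and logging around every endpoint.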

To learn more about maintaining control in your cloud environment, click here.

Original article here.

 


standard

35 Amazing Examples Of How Blockchain Is Changing Our World

2018-02-13 - By 

It’s quickly becoming apparent that blockchain technology is about far more than just Bitcoin. Across finance, healthcare, media, government and other sectors, innovative uses are appearing every day.

Here is a list of 35 which I have come across. While some may fail to live up to their promises, others could go on to become household names if blockchain proves itself to be as revolutionary as many are predicting.

Cyber security

Guardtime – This company is creating “keyless” signature systems using blockchain, which are currently used to secure the health records of one million Estonian citizens.

REMME is a decentralized authentication system which aims to replace logins and passwords with SSL certificates stored on a blockchain.

Healthcare

Gem – This startup is working with the Centers for Disease Control to put disease outbreak data onto a blockchain, which it says will increase the effectiveness of disaster relief and response.

SimplyVital Health – Has two health-related blockchain products in development: ConnectingCare, which tracks the progress of patients after they leave hospital, and Health Nexus, which aims to provide decentralized blockchain patient records.

MedRec – An MIT project involving blockchain electronic medical records designed to manage authentication, confidentiality and data sharing.

Financial services

ABRA – A cryptocurrency wallet which uses the Bitcoin blockchain to hold and track balances stored in different currencies.

Bank Hapoalim – A collaboration between the Israeli bank and Microsoft to create a blockchain system for managing bank guarantees.

Barclays – Barclays has launched a number of blockchain initiatives involving tracking financial transactions, compliance and combating fraud. It states that “Our belief …is that blockchain is a fundamental part of the new operating system for the planet.”

Maersk – The shipping and transport consortium has unveiled plans for a blockchain solution for streamlining marine insurance.

Aeternity – Allows the creation of smart contracts which become active when network consensus agrees that conditions have been met – allowing for automated payments to be made when parties agree that conditions have been met, for example.

Augur – Allows the creation of blockchain-based prediction markets for trading of derivatives and other financial instruments in a decentralized ecosystem.

Manufacturing and industrial

Provenance – This project aims to provide a blockchain-based provenance record of transparency within supply chains.

Jiocoin – India’s biggest conglomerate, Reliance Industries, has said that it is developing a blockchain-based supply chain logistics platform along with its own cryptocurrency, Jiocoin.

Hijro – Previously known as Fluent, aims to create a blockchain framework for collaborating on prototyping and proof-of-concept.

SKUChain – Another blockchain system for allowing tracking and tracing of goods as they pass through a supply chain.

Blockverify – A blockchain platform which focuses on anti-counterfeit measures, with initial use cases in the diamond, pharmaceuticals and luxury goods markets.

Transactivgrid – A business-led community project based in Brooklyn allowing members to locally produce and sell energy, with the goal of reducing costs involved in energy distribution.

STORJ.io – Distributed and encrypted cloud storage, which allows users to share unused hard drive space.

Government

Dubai – Dubai has set sights on becoming the world’s first blockchain-powered state. In 2016 representatives of 30 government departments formed a committee dedicated to investigating opportunities across health records, shipping, business registration and preventing the spread of conflict diamonds.

Estonia – The Estonian government has partnered with Ericsson on an initiative involving creating a new data center to move public records onto the blockchain.

South Korea – Samsung is creating blockchain solutions for the South Korean government which will be put to use in public safety and transport applications.

Govcoin – The UK Department of Work and Pensions is investigating using blockchain technology to record and administer benefit payments.

Democracy.earth – This is an open-source project aiming to enable the creation of democratically structured organizations, and potentially even states or nations, using blockchain tools.

Followmyvote.com – Allows the creation of secure, transparent voting systems, reducing opportunities for voter fraud and increasing turnout through improved accessibility to democracy.

Charity

Bitgive – This service aims to provide greater transparency to charity donations, and clearer links between giving and project outcomes. It is working with established charities including Save The Children, The Water Project and Medic Mobile.

Retail

OpenBazaar – OpenBazaar is an attempt to build a decentralized market where goods and services can be traded with no middle-man.

Loyyal – This is a blockchain-based universal loyalty framework, which aims to allow consumers to combine and trade loyalty rewards in new ways, and retailers to offer more sophisticated loyalty packages.

Blockpoint.io – Allows retailers to build payment systems around blockchain currencies such as Bitcoin, as well as blockchain derived gift cards and loyalty schemes.

Real Estate

Ubiquity – This startup is creating a blockchain-driven system for tracking the complicated legal process which creates friction and expense in real estate transfer.

Transport and Tourism

IBM Blockchain Solutions – IBM has said it will go public with a number of non-finance related blockchain initiatives with global partners in 2018. This video envisages how efficiencies could be driven in the vehicle leasing industry.

Arcade City – An application which aims to beat Uber at their own game by moving ride sharing and car hiring onto the blockchain.

La’Zooz – A community-owned platform for synchronizing empty seats with passengers in need of a lift in real-time.

Webjet – The online travel portal is developing a blockchain solution to allow stock of empty hotel rooms to be efficiently tracked and traded, with payment fairly routed to the network of middle-men sites involved in filling last-minute vacancies.

Media

Kodak – Kodak recently sent its stock soaring after announcing that it is developing a blockchain system for tracking intellectual property rights and payments to photographers.

Ujomusic – Founded by singer songwriter Imogen Heap to record and track royalties for musicians, as well as allowing them to create a record of ownership of their work.

It is exciting to see all these developments. I am sure not all of these will make it into successful long-term ventures, but if they indicate one thing, it is the vast potential that blockchain technology offers.

Original article here.

 


standard

CEOs should do 3 things to help their workforce fully embrace AI

2018-02-07 - By 

There’s no denying it: the era of the intelligent enterprise is upon us. As technologies like AI, cognitive computing, and predictive analytics become hot topics in the corporate boardroom, sleek startups and centuries-old companies alike are laying plans for how to put these exciting innovations to work.

Many organizations, however, are in the nascent days of AI implementation. Of the three stages of adoption—education, prototyping, and application at scale—most executives are still taking a tentative approach to exploring AI’s true potential. They’re primarily using the tech to drive small-scale efficiencies.

But AI’s real opportunity lies in tapping completely new areas of value. AI can help established businesses expand their product offerings and infiltrate (or even invent) entirely new markets, as well as streamline internal processes. The rewards are ripe: According to Accenture projections, fully committing to AI could boost global profits by a whopping $4.8 trillion by 2022. For the average S&P 500 company, this would mean an additional $7.5 billion in revenue over the next four years.

Faster and more compelling innovations will be driven by the intersection of intelligent technology and human ingenuity—yes, the workforce will see sweeping changes. It’s already happening: Nearly all leaders report they’ve redesigned jobs to at least some degree to account for disruption. What’s more, while three-quarters of executives plan to use AI to automate tasks, virtually all intend to use it to augment the capabilities of their people.

Still, many execs report they’re not sure how to pull off what Accenture calls Applied Intelligence—the ability to implement intelligent technology and human ingenuity to drive growth.

Here are three steps the C-suite can take to elevate their organizations into companies that are fully committed to creating new value with AI/human partnerships.

1) Reimagine what it means to work

The most significant impact of AI won’t be on the number of jobs, but on job content. But skepticism about employees’ willingness to embrace AI is unfounded. Though nearly 25% of executives cite “resistance by the workforce” as one of the obstacles to integrating AI, 62% of workers—both low-skilled and high-skilled—are optimistic about the changes AI will bring. In line with that optimism, 67% of employees say it is important to develop the skills to work with intelligent technologies.

Yes, ambiguities about exactly how automation fits into the future workplace persist. Business leaders should look for specific skills and tasks to supplement instead of thinking about replacing generalized “jobs”. AI is less about replacing humans than it is about augmenting their roles.

While AI takes over certain repetitive or routine tasks, it opens doors for project-based work. For example, if AI steps into tasks like sorting email or analyzing inventory, it can free up employees to develop more in-depth customer service tactics, like targeted conversations with key clients. Interestingly, data suggests that greater investment in AI could actually increase employment by as much as 10% in the next three years alone.

2) Pivot the workforce to your company’s unique value proposition

Today, AI and human-machine collaboration is beginning to change how enterprises conduct business. But it has yet to transform what business they choose to pursue. Few companies are creating entirely new revenue streams or customer experiences. But at least 72% of execs agree that adopting intelligent technologies will be critical to their organization’s ability to differentiate in the market.

One of the challenges facing companies is how to make a business case for that pivot to new opportunities without disrupting today’s core business. A key part is turning savings generated by automation into the fuel for investing in the new business models and workforces that will ultimately take a company into new markets.

Take Accenture: the company puts 60% of the money it saves through AI investments into training programs. That’s resulted in the retraining of tens of thousands of people whose roles were automated. Those workers can now focus on more high-value projects, working with AI and other technologies to offer better services to clients.

3) Scaling new skilling: Don’t choose between hiring a human or a machine—hire both

Today, most people already interact with machines in the workplace—but humans still run the show. CEOs still value a number of decidedly human skills—resource management, leadership, communication skills, complex problem solving, and judgment—but in the future, human ingenuity will not suffice. Working in tandem, smarter machines and better skilled humans will likely drive swifter and more compelling innovations.

To scale up new skilling, employers may want to consider these three steps:

  1. Prioritize skills to be honed. While hard skills like data analytics, engineering, or coding are easy to define, innately “human” skills like ethical decision-making and complex problem-solving need to be considered carefully.
  2. Provide targeted training programs. Employees’ level of technical expertise, willingness to learn new technologies, and specific skill sets will determine how training programs should be developed across the organization.
  3. Use digital solutions for training. Taking advantage of cutting-edge technologies like virtual reality or augmented reality can teach workers how to interact with smart machinery via realistic simulations.

While it is natural for businesses to exploit AI to drive efficiencies in the short term, their long-term growth depends on using AI far more creatively. It will take new forms of leadership and imagination to prepare the future workforce to truly partner with intelligent machines. If they succeed, it will be a case of humans helping AI help humans.

Original article here.

 


standard

MIT is aiming for AI moonshots with Intelligence Quest

2018-02-01 - By 

Artificial intelligence has long been a focus for MIT. The school’s been researching the space since the late ’50s, giving rise (and lending its name) to the lab that would ultimately become known as CSAIL. But the Cambridge university thinks it can do more to elevate the rapidly expanding field.

This week, the school announced the launch of the MIT Intelligence Quest, an initiative aimed at leveraging its AI research into something it believes could be game-changing for the category. The school has divided its plan into two distinct categories: “The Core” and “The Bridge.”

“The Core is basically reverse-engineering human intelligence,” dean of the MIT School of Engineering Anantha Chandrakasan tells TechCrunch, “which will give us new insights into developing tools and algorithms, which we can apply to different disciplines. And at the same time, these new computer science techniques can help us with the understanding of the human brain. It’s very tightly linked between cognitive science, neuroscience and computer science.”

The Bridge, meanwhile, is designed to provide access to AI and ML tools across its various disciplines. That includes research from both MIT and other schools, made available to students and staff.

“Many of the products are moonshots,” explains James DiCarlo, head of the Department of Brain and Cognitive Sciences. “They involve teams of scientists and engineers working together. It’s essentially a new model and we need folks and resources behind that.”

Funding for the initiative will be provided by a combination of philanthropic donations and partnerships with corporations. But while the school has had blanket partnerships in the past, including, notably, the MIT-IBM Watson AI Lab, the goal here is not to become beholden to any single company. Ideally the school will be able to work alongside a broad range of companies to achieve its large-scale goals.

“Imagine if we can build machine intelligence that grows the way a human does,” adds professor of Cognitive Science and Computation, Josh Tenenbaum. “That starts like a baby and learns like a child. That’s the oldest idea in AI and it’s probably the best idea… But this is a thing we can only take on seriously now and only by combining the science and engineering of intelligence.”

Original article here.

 


standard

These Smart Banknotes Could Bring Crypto To The Masses

2018-01-30 - By 

2017 saw cryptocurrencies take us by storm; bitcoin’s meteoric rise woke the world up to the possibilities of distributed ledgers and their potential impact. It sent investors into a flurry of speculation and FOMO. At one point the digital currency surged more than 1,900%. All the talk has even got Wall Street dipping their toes in. It seems crypto is no longer considered an ephemeral rush and the technology behind it, blockchain, is proving as profoundly revolutionary as the internet was and is.

Every successful technology navigates a Cambrian era of growth before it figures out what it’s best used for. Blockchain and cryptocurrencies are arguably in their one-size-fits-all stage. The issue is that one size never fits all. What of the sceptic, the technically unsophisticated, the conservative, the one sitting on the fence? How do we get crypto to them? People love crypto because it’s a decentralised trust-less system that needs no middleman — it allows digital exchange of value using existing computing power. That’s great! But managing private keys and buying and selling crypto is complex; you need to open an account on an exchange, get a wallet, manage keys and passwords. In most countries you need to pass lengthy and complex Know Your Customer hurdles. There’s just so much friction involved; a transaction takes a long time, uses a lot of energy and involves a lot of risk (bitcoin is very easy to lose). The sceptic or the average Joe just isn’t going to bother. Would not something tangible, accessible, easy to grasp and less of an illusion be so much easier? Like a physical crypto bank note? Why not allow the masses to skip this digital cash metaphor and revert to something simpler, almost reminiscent of China’s easily receivable Hong Bao (红包). Here I chat with Andrew Pantyukhin, Co-Founder at Tangem, who is changing this paradigm and bringing physical crypto to the masses.

What does Tangem do?

Tangem is the first physical manifestation of digital assets. We are the first real physical bitcoin — the first tangible bitcoin. Tangem notes are smart banknotes with a special chip that carries cryptocurrencies or any other digital assets. With these banknotes you can conduct physical crypto transactions by simply handing them over or receiving them. Unlike online cryptocurrency transactions, physical transactions are immediate, anonymous and free of fees. They are also truly decentralized, meaning they will never be restricted by technological limitations.

Where can you get them?

You will be able to get them all over the world, from corner stores, retail chains, special ATMs, or people that already have them. You use them exactly like cash, but it’s not fiat currency backed by a government, it’s crypto!

Why did you create Tangem?

It’s like champagne — it was a bit of an accident! We have one of the most unique microelectronics teams in the world, one that can program secure elements natively — it’s a very rare set of skills. When cryptocurrencies started gaining traction around 2014–2015, we started researching what we could do in this field and how we could apply ourselves. We thought about smart cards that would carry value, but it was impractical at the time. The chips were too slow, lacked elliptic-curve cryptography support, and were too insecure, too power-hungry, or prohibitively bulky and expensive.
Because of our microelectronics exposure, we had good working relationships with all of the major chipset vendors in the world, such as NXP, Samsung and Infineon. At one point, when we were talking to one of them about implementing cryptocurrencies on smart cards, they told us that they had on their unannounced roadmap a chip family that would do everything we needed at a great yield and price point. We got the relevant information and specs for the chip, plus samples, months before anyone else in the market. Now there are several such chips on the market, and we became their first major client and use case in the world.

Why is it possible now?

In 2017 we saw a minor breakthrough in chip technology. Historically, there were two directions in embedded chips: one that was super secure and designed to be “unhackable” (the so-called “secure element”), and another that was powerful and versatile enough to handle elliptic-curve cryptography and complex calculations. Last year, certain types of secure elements gained support for advanced cryptography and embedded flash memory, while achieving even higher levels of security certification, lower power consumption and remarkable affordability. Even the 65-nanometer variants are extremely thin, small and physically resilient.

Why are these smart banknotes considered unhackable?

Hacking a single banknote is so costly that it’s simply not worth doing. Moreover, hacking a single banknote doesn’t give you access to any other banknotes.

The tamper-proof chip technology has been developed and continuously improved over decades for military and government applications, like identification and access control, and for the financial services and telecom industries, most recently in credit cards and SIM cards. The technology addresses all known attack vectors at both the hardware and software levels.

Why does the world need a smart banknote when everything is digital?

It’s really very simple. Crypto is still very difficult to use; it has a steep learning curve. Users have to go through many steps that are complex and tiresome. With a physical banknote, all you need is the banknote itself; there is no need to learn or know anything about cryptocurrency. Everyone knows how cash works. We don’t need to teach you anything. Plus, everyone knows how to keep things physically safe — you don’t need highly sophisticated digital skills.

What’s the market size?

Today we believe there are only around five million people actively trading and using cryptocurrencies (that is, holding over 100 dollars in crypto), spread across most likely 20 million wallets. Global awareness of cryptocurrencies, however, extends to about a billion people today. We believe the demand will come from that one billion, and that’s the market we are going after. That’s the current demand, and it will grow quickly to seven billion once we remove the barriers to use.

Besides cryptocurrencies what are the other applications?

We still treat cryptocurrencies through our perception of fiat currencies: controlled, centralised and tied to GDP. What we don’t yet appreciate is what happens when anyone can release their own private, regional, industrial or corporate currencies at almost no cost and circulate them freely among their employees, partners and customers — that would qualitatively change everything we know about currencies, economics and monetary mechanics. I think that’s the most interesting effect we are going to see.

So it’s not just about existing money, the whole definition and perception of cash is going to change once we drop the cost of introducing a new currency to almost zero.

Of course, we are also thinking about going after other segments. These chips are super secure, so they can be used for government-issued or commercially issued identification. They could be used for loyalty cards, gift cards, ticketing, or any application that requires digital proof of something physical, or physical proof of digital assets. We offer a new way of tying the physical and digital together, which has never been done before. We inherently treat digital data as easily copiable, and this technology guarantees it cannot be copied. That, again, has never been possible or practical before.

On that note, is there anyone else that is doing what you are doing or similar to what you are doing?

The set of technologies we use is emerging and will be available to everyone in the coming years. We were very lucky to have most of the required software stack and talent even before the latest advances became available. So we could just divert our engineering resources to the new project. That was extremely lucky. It took us altogether about three years to develop that software stack and expertise — it would take a minimum of one to two years for a competitor with unlimited funds to get to the same level of functionality and security. Obviously, by the time they get there we hope to be light years ahead.

How expensive is it?

Our current production cost is under $2 per item — we’re making millions of units now. When scaled to billions of units, it will be in the same ballpark as modern paper banknotes. It’s a no-brainer for most governments to switch their legal tender to this tech in the future. One of our long-term goals is to extend the national blockchains that certain governments are developing to their physical currencies.

Finally, what’s the next goal for Tangem?

We’ve developed the technology to grow cryptocurrencies to the first billion people; now it’s also up to us to develop distribution and commercial partnerships to physically get this technology into the hands of billions of people around the world.

Original article here.

 

 


standard

Content Types for Promoting Your Product, Service and Business

2018-01-26 - By 

Coming up with original ideas to feed the ever-increasing content demands of your readers, subscribers, prospects, customers and social fans is hard work! The lack of a single customer view was cited as the top barrier to successful cross-channel marketing in a recent Experian study, and it’s contributed to the skyrocketing volume of content we need to reach various audience segments where and when they prefer to consume content.

We also need to consider the devices on which consumers will access our content. Is it mobile-friendly? Is it visible and legible on small screens? Do you have longer form content for those who need more information to make a decision?

Of course, you then have to think of content formats — whether the message you’re trying to convey will come across best in written form, audio, video, visually, etc.

To that end, I found this awesome infographic of content formats to help marketers get inspired and break out of the original-content-creation rut. There are 44 different content formats in this visual, which you can keep as a cheat sheet and refer to whenever you need ideas.

It’s also helpful for repurposing content. Make sure you’re getting the most mileage out of your content by repurposing it for your different audience segments. For example, that webinar you hosted can be repurposed into a summary blog post. The images from the PowerPoint you used in the webinar can become standalone graphics that you can share via social media. You can release the audio portion only of your webinar as a podcast, perhaps with supporting collateral like an e-book.

Check out this list of 44 content formats you can use to add flavor and variety to your content strategy:

 

Original article here.

 


standard

Google’s Accelerated Mobile Pages (AMP)

2018-01-18 - By 

Starting AMP from scratch is great, but what if you already have an existing site? Learn how you can convert your site to AMP using AMP HTML.

“What’s Allowed in AMP and What Isn’t”: https://goo.gl/ugMhHc

Tutorial on how to convert HTML to AMP: https://goo.gl/JwUVyG

Reach out with your AMP related questions: https://goo.gl/UxCWfz

Watch all Amplify episodes: https://goo.gl/B9CCl4

Subscribe to The AMP Channel and never miss an Amplify episode: https://goo.gl/g2Y8h7

 

 

Original video here.


standard

IBM Fueling 2018 Cloud Growth With 1,900 Cloud Patents Plus Blazingly Fast AI-Optimized Chip

2018-01-17 - By 

CLOUD WARS — Investing in advanced technology to stay near the top of the savagely competitive enterprise-cloud market, IBM earned more than 1,900 cloud-technology patents in 2017 and has just released an AI-optimized chip said to have 10 times more IO and bandwidth than its nearest rival.

IBM is coming off a year in which it stunned many observers by establishing itself as one of the world’s top three enterprise-cloud providers—along with Microsoft and Amazon—by generating almost $16 billion in cloud revenue for the trailing 12 months ended Oct. 31, 2017.

While that $16-billion cloud figure pretty much matched the cloud-revenue figures for Microsoft and Amazon, many analysts and most media observers continue—for reasons I cannot fathom—to fail to acknowledge IBM’s stature as a broad-based enterprise-cloud powerhouse whose software capabilities position the company superbly for the next wave of cloud growth in hybrid cloud, PaaS, and SaaS.

And IBM, which announces its Q4 and annual earnings results on Thursday, Jan. 18, is displaying its full commitment to remaining among the top ranks of cloud vendors by earning almost 2,000 patents for cloud technologies in 2017, part of a companywide total of 9,043 patents received last year.

Noting that almost half of those 9,043 patents came from “pioneering advancements in AI, cloud computing, cybersecurity, blockchain and quantum computing,” IBM CEO Ginni Rometty said this latest round of advanced-technology innovation is “aimed at helping our clients create smarter businesses.”

In those cloud-related areas, IBM said its new patents include the following:

  • 1,400 AI patents, including one for an AI system that analyzes and can mirror a user’s speech patterns to make it easier for humans and AI to understand one another.
  • 1,200 cybersecurity patents, “including one for technology that enables AI systems to turn the table on hackers by baiting them into email exchanges and websites that expend their resources and frustrate their attacks.”
  • In machine learning, a system for autonomous vehicles that transfers control of the vehicle to humans “as needed, such as in an emergency.”
  • In blockchain, a method for reducing the number of steps needed to settle transactions among multiple business parties, “even those that are not trusted and might otherwise require a third-party clearinghouse to execute.”

For IBM, the pursuit of new cloud technologies is particularly important because a huge portion of its approximately $16 billion in cloud revenue comes from outside the standard cloud-revenue stream of IaaS, PaaS and SaaS and instead is generated by what I call IBM’s “cloud-conversion” business—an approach unique to IBM.

While IBM rather aridly defines that business as “hardware, software and services to enable IBM clients to implement comprehensive cloud solutions,” the concept comes alive when viewed through the perspective of what those offerings mean to big corporate customers. To understand how four big companies are tapping into IBM’s cloud conversion business, please check out my recent article called Inside IBM’s $7-Billion Cloud-Solutions Business: 4 Great Digital-Transformation Stories.

IBM’s most-recent batch of cloud-technology patents—and IBM has now received more patents per year than any other U.S. company for 25 straight years—includes a patent that an IBM blog post describes this way: “a system that monitors data sources including weather reports, social networks, newsfeeds and network statistics to determine the best uses of cloud resources to meet demand. It’s one of the numerous examples of how using unstructured data can help organizations work more efficiently.”

That broad-based approach to researching and developing advanced technology also led to the launch last month of a microchip that IBM says is specifically optimized for artificial-intelligence workloads.

A TechCrunch article about IBM’s new Power9 chip said it will be used not only in the IBM Cloud but also in the Google Cloud: “The company intends to sell the chips to third-party manufacturers and to cloud vendors including Google. Meanwhile, it’s releasing a new computer powered by the Power9 chip, the AC922, and it intends to offer the chips in a service on the IBM cloud.”

How does the new IBM chip stack up? The TechCrunch article offered this breathless endorsement of the Power9’s performance from analyst Patrick Moorhead of Moor Insights & Strategy: “Power9 is a chip which has a new systems architecture that is optimized for accelerators used in machine learning. Intel makes Xeon CPUs and Nervana accelerators and NVIDIA makes Tesla accelerators. IBM’s Power9 is literally the Swiss Army knife of ML acceleration as it supports an astronomical amount of IO and bandwidth, 10X of anything that’s out there today.”

It’s shaping up to be a very interesting year for IBM in the cloud, and I’ll be reporting later this week on Thursday’s earnings release.

As businesses jump to the cloud to accelerate innovation and engage more intimately with customers, my Cloud Wars series analyzes the major cloud vendors from the perspective of business customers.

 

Original article here.

 


standard

Rethinking Gartner’s Hype Cycle

2018-01-12 - By 

The Gartner hype cycle is one of the more brilliant insights ever uncovered in the history of technology. I rank it right up there with Moore’s Law and Christensen’s model of disruptive innovation from below.

Gartner’s hype cycle describes a 5-stage pattern that almost all new technologies follow:

  1. A technology trigger introduces new possibilities — things like AI, chatbots, AR/VR, blockchain, etc. — which capture the imagination and create a rapid rise in expectations. (“Big data is a breakthrough!”)
  2. The fervor quickly reaches a peak of inflated expectations — the “hype” is deafening and dramatically overshoots the reality of what’s possible. (“Big data will change everything!”)
  3. Reality soon sets in though, as people realize that the promises of that hype aren’t coming to fruition. Expectations drop like a rock, and the market slips into a trough of disillusionment. (“Big data isn’t that magical after all.”)
  4. But there is underlying value to the technology, and as it steadily improves, people begin to figure out realistic applications. This is the slope of enlightenment: expectations rise again, but less sharply, in alignment with what’s achievable. (“Big data is actually useful in these cases…”)
  5. Finally the expectations of the technology are absorbed into everyday life, with well-established best practices, leveling off in the plateau of productivity. (“Big data is an ordinary fact of life. Here’s how we use it.”)

It might not be a law of nature, but as a law of technology markets, it’s pretty consistent.

We hear a lot about the hype cycle in the martech world, because we have been inundated with new technologies in marketing. I’m covering a number of them in my 2018 update to the 5 disruptions to marketing: artificial intelligence (AI), conversational interfaces, augmented reality (AR), Internet of Things (IoT), customer data platforms (CDP), etc.

In marketing, it’s not just technologies that follow this hype cycle, but also concepts and tactics, such as content marketing, account-based marketing, revenue operations, and so on. By the way, that’s not a knock against any of those. There is real value in all of them. But the hype exceeds the reality in the first 1/3 or so of their lifecycle.

Indeed, it’s the reality underneath the hype cycle that people lose sight of. Expectations are perception. The actual advancement of the technology (or concept or tactic) is reality.

At the peak of inflated expectations, reality is far below what’s being discussed ad nauseam in blog posts and board rooms. In the trough of disillusionment, the actual, present-day potential is sadly underestimated — discussions shift to the inflated expectations of the next new thing.

However, this disconnect between expectations and reality is a good thing — if you know what you’re doing. It creates opportunities for a savvy company to manage to the reality while competitors chase the hype cycle.

It’s a variation of the age-old investment advice: buy low, sell high.

At the peak of inflated expectations, you want to avoid overspending on technology and overpromising results. You don’t want to ignore the movement entirely, since there is fire smoldering below the smoke. But you want to evaluate claims carefully, run things with an experimental mindset, and focus on real learning.

In the trough of disillusionment, that’s when you want to pour gas on the fire. Leverage what you learned from your experimental phase to scale up the things you know work, because you’ve proven them in your business.

Don’t be distracted by the backlash of negative chatter at this stage of the hype cycle. Reinvest your experimental efforts in pushing the possibilities ahead of the slope of enlightenment. This is your chance to race ahead of competitors who are pulling back from their missed results against earlier, unrealistic expectations.

You want to track the actual advancement of the technology as closely as possible. If you can do that, you’ll get two big wins: one as the hype is on the way up, and one on the way down. You’ll harness the pendulum of the hype cycle into useful energy.

P.S. When I program the MarTech conference agenda, my goal is to give attendees as accurate a picture of the actual advancement of marketing technologies as possible.

I won’t try to sell you a ticket on overinflated expectations. But I will try to sell you a ticket on getting you the ground truth of marketing technology and innovation, so you can capture the two opportunities that are yours to take from the hype cycle.

Our next event is coming up, April 23-25 in San Jose. Our early bird rates expire on January 27, which saves you $500 on all-access passes. Take advantage of that pricing while you can.

 

Original article here.


standard

Researchers implement 3-qubit Grover search on a quantum computer

2018-01-11 - By 

Searching large, unordered databases for a desired item is a time-consuming task for classical computers, but quantum computers are expected to perform these searches much more quickly. Previous research has shown that Grover’s search algorithm, proposed in 1996, is an optimal quantum search algorithm, meaning no other quantum algorithm can search faster. However, implementing Grover’s algorithm on a quantum system has been challenging.

Now in a new study, researchers have implemented Grover’s search algorithm with trapped atomic ions. The algorithm uses three qubits, which corresponds to a database of 8 (2³) items. When used to search the database for one or two items, the Grover algorithm’s success probabilities were—as expected—significantly higher than the best theoretical success probabilities for classical computers.

The researchers, Caroline Figgatt et al., at the University of Maryland and the National Science Foundation, have published a paper on their results in a recent issue of Nature Communications.

“This work is the first implementation of a 3-qubit Grover search algorithm in a scalable quantum computing system,” Figgatt told Phys.org. “Additionally, this is the first implementation of the algorithm using Boolean oracles, which can be directly compared with a classical search.”

The classical approach to searching a database is straightforward. Basically, the algorithm randomly guesses an item, or “solution.” So, for example, for a single search iteration on a database of 8 items, a classical algorithm makes one random query and, if that fails, it makes a second random guess—in total, guessing 2 out of 8 items, resulting in a 25% success rate.

Grover’s algorithm, on the other hand, first initializes the system in a quantum superposition of all 8 states, and then uses a quantum function called an oracle to mark the correct solution. As a result of these quantum strategies, for a single search iteration on an 8-item database, the theoretical success rate increases to 78%. With a higher success rate comes faster search times, as fewer queries are needed on average to arrive at the correct answer.

In the implementation of Grover’s algorithm reported here, the success rate was lower than the theoretical value—roughly 39% or 44%, depending on the oracle used—but still markedly higher than the classical success rate.

The researchers also tested Grover’s algorithm on databases that have two correct solutions, in which case the theoretical success rates are 47% and 100% for classical and quantum computers, respectively. The implementation demonstrated here achieved success rates of 68% and 75% for the two oracle types—again, better than the highest theoretical value for classical computers.
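To make these theoretical figures concrete, here is a minimal state-vector sketch of Grover’s algorithm in plain Python with NumPy. This is an illustrative simulation under ideal, noiseless assumptions, not the trapped-ion implementation the researchers used, and the function name and structure are our own. It reproduces the ideal single-iteration success probabilities quoted above: roughly 78% for one marked item and 100% for two marked items in an 8-entry database.

import numpy as np

def grover_success_probability(n_qubits, marked, iterations=1):
    """Illustrative noiseless simulation of Grover's algorithm; returns the
    probability of measuring one of the marked database items."""
    n_items = 2 ** n_qubits
    # Start in a uniform superposition over all items.
    state = np.full(n_items, 1 / np.sqrt(n_items))
    for _ in range(iterations):
        # Oracle: flip the sign of the amplitudes of the marked items.
        state[marked] *= -1
        # Diffusion operator: invert every amplitude about the mean.
        state = 2 * state.mean() - state
    # Probability of observing any marked item on measurement.
    return float(np.sum(state[marked] ** 2))

# One marked item in an 8-entry database: 25/32 = 78.125%.
print(grover_success_probability(3, marked=[5]))
# Two marked items: exactly 1.0 after a single iteration.
print(grover_success_probability(3, marked=[2, 6]))

For comparison, the classical strategy described above (two random guesses out of eight items) succeeds only 25% of the time; the hardware results reported here, roughly 39–44% for one solution and 68–75% for two, sit between that classical baseline and these ideal quantum values.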

The researchers expect that, in the future, this implementation of Grover’s algorithm can be scaled up to larger databases. As the size of the database increases, the quantum advantage over classical computers grows even larger, which is where future applications will benefit.

“Moving forward, we plan to continue developing systems with improved control over more qubits,” Figgatt said.

Original article here.

 

