Posted On: January 2018 - AppFerret


These Smart Banknotes Could Bring Crypto To The Masses

2018-01-30 - By 

2017 saw cryptocurrencies take us by storm: bitcoin’s meteoric rise woke the world up to the possibilities of distributed ledgers and their potential impact, and sent investors into a flurry of speculation and FOMO. At one point the digital currency had surged more than 1,900%. All the talk has even got Wall Street dipping its toes in. It seems crypto is no longer considered an ephemeral rush, and the technology behind it, blockchain, is proving as profoundly revolutionary as the internet.

Every successful technology navigates a Cambrian era of growth before it figures out what it’s best used for. Blockchain and cryptocurrencies are arguably in their one-size-fits-all stage, and the issue is that one size never fits all. What of the sceptic, the technically unsophisticated, the conservative, the fence-sitter? How do we get crypto to them? People love crypto because it’s a decentralised, trustless system that needs no middleman; it allows digital exchange of value using existing computing power. That’s great! But managing private keys and buying and selling crypto is complex: you need to open an account on an exchange, get a wallet, and manage keys and passwords. In most countries you also need to clear lengthy and complex Know Your Customer hurdles. There’s just so much friction involved: a transaction takes a long time, uses a lot of energy and carries a lot of risk (bitcoin is very easy to lose). The sceptic or the average Joe just isn’t going to bother. Wouldn’t something tangible, accessible, easy to grasp and less of an illusion be so much easier? Like a physical crypto banknote? Why not let the masses skip the digital-cash metaphor and revert to something simpler, almost reminiscent of China’s familiar hong bao (红包) gift envelopes? Here I chat with Andrew Pantyukhin, Co-Founder at Tangem, who is changing this paradigm and bringing physical crypto to the masses.

What does Tangem do?

Tangem is the first physical manifestation of digital assets: the first real, tangible bitcoin. Tangem notes are smart banknotes with a special chip that carries cryptocurrencies or any other digital assets. With these banknotes you can conduct physical crypto transactions simply by handing them over or receiving them. Unlike online cryptocurrency transactions, physical transactions are immediate, anonymous and free of any fees. They are also truly decentralized, meaning they will never be restricted by technological limitations.

Where can you get them?

You will be able to get them all over the world: from corner stores, retail chains, special ATMs, or from people who already have them. You use them exactly like cash, but it’s not fiat currency backed by a government; it’s crypto!

Why did you create Tangem?

It’s like champagne: it was a bit of an accident! We have one of the most unique microelectronics teams in the world, one that can program secure elements natively, which is a very rare set of skills. When cryptocurrencies started gaining traction around 2014–2015, we began researching what we could do in this field and how we could apply ourselves. We thought about smart cards that would carry value, but it was impractical at the time: the chips were too slow, lacked elliptic-curve cryptography support, were too insecure or too power-hungry, or were prohibitively bulky and expensive.
Because of our microelectronics exposure we had good working relationships with all the major chipset vendors in the world, like NXP, Samsung and Infineon. At one point, while we were talking to one of them about implementing cryptocurrencies on smart cards, they told us they had on their unannounced roadmap a chip family that would do everything we needed at a great yield and price point. We got the relevant information, specs and samples of the chip months before anyone else in the market. Now there are several such chips on the market, and we became their first major client and use case in the world.

Why is it possible now?

In 2017 we saw a minor breakthrough in chip technology. Historically there were two directions in embedded chips: one that was super secure and designed to be “unhackable” (the so-called “secure element”), and another that was powerful and versatile enough to handle elliptic-curve cryptography and complex calculations. Last year certain types of secure elements gained support for advanced cryptography and embedded flash memory, while achieving even higher levels of security certification, lower power consumption and incredible affordability. Even the 65-nanometer variants are extremely thin, small and physically resilient.

Why are these smart banknotes considered unhackable?

The cost of hacking a single banknote is so high that it’s simply not economical to attempt. Moreover, hacking a single banknote doesn’t give you access to any other banknotes.

The tamper-proof chip technology has been developed and continuously improved over decades for military and government applications, such as identification and access control, and for the financial services and telecom industries, most recently in credit cards and SIM cards. The technology addresses all known attack vectors at both the hardware and software levels.
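The interview doesn’t spell out the exact protocol the chip speaks, but the standard way a secure element proves it controls a private key, without ever exposing that key, is a challenge-response signature. Below is a minimal Python sketch of that general pattern (using the third-party ecdsa package; the names and flow here are illustrative assumptions, not Tangem’s actual implementation):

# Conceptual sketch of chip-backed proof of ownership. This is the generic
# challenge-response pattern secure elements implement in hardware, not
# Tangem's actual protocol.
import os
from ecdsa import SigningKey, NIST256p, BadSignatureError

# At manufacture, the secure element generates a key pair internally.
# The private key never leaves the chip; only the public key is exported,
# e.g. as the banknote's blockchain address.
chip_key = SigningKey.generate(curve=NIST256p)  # lives inside the chip
note_pubkey = chip_key.verifying_key            # publicly readable

def chip_sign(challenge: bytes) -> bytes:
    # Runs inside the secure element; the key never crosses its boundary.
    return chip_key.sign(challenge)

def verify_note(pubkey, challenge: bytes, signature: bytes) -> bool:
    # Anyone with a reader can confirm the chip controls the key.
    try:
        return pubkey.verify(signature, challenge)
    except BadSignatureError:
        return False

challenge = os.urandom(32)  # fresh randomness defeats replayed signatures
print(verify_note(note_pubkey, challenge, chip_sign(challenge)))  # True

Because the funds sit at an address whose private key only the chip can use, handing over the note hands over the ability to spend, and extracting the key would mean defeating the tamper-proof hardware itself. That is what makes attacking a single note uneconomical.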

Why does the world need a smart banknote when everything is digital?

It’s really very simple. Crypto is still very difficult to use; it has a steep learning curve, and users have to go through many steps that are complex and tiresome. With a physical banknote, all you need is the banknote itself; there is no need to learn or know anything about cryptocurrency. Everyone knows how cash works, so we don’t need to teach you anything. And everyone knows how to keep things physically safe; you don’t need highly sophisticated digital skills.

What’s the market size?

Today we believe there are only around five million people actively trading and using cryptocurrencies, i.e. holding over 100 dollars in crypto, spread across most likely 20 million wallets. Global awareness of cryptocurrencies, however, extends to about a billion people. We believe the demand will come from that one billion, and that’s the market we are going after. That’s the current demand, and it will grow quickly toward seven billion once we remove the barriers to use.

Besides cryptocurrencies, what are the other applications?

We still treat cryptocurrencies with our perception of fiat currencies: controlled, centralised and tied to GDP. What we don’t yet appreciate is what happens when anyone can release their own private, regional, industrial or corporate currencies at almost no cost and circulate them endlessly among their employees, partners and customers. That would qualitatively change everything we know about currencies, economics and monetary mechanics. I think that’s the most interesting effect we are going to see.

So it’s not just about existing money: the whole definition and perception of cash will change once we drop the cost of introducing a new currency to almost zero.

Of course, we are also thinking about going after other segments. These chips are super secure, so they can be used for government or commercially issued identification, loyalty cards, gift cards, ticketing, and any other application that requires digital proof of something physical, or physical proof of digital assets. We offer a new way of tying the physical and digital together. We inherently treat digital data as easily copiable; this technology guarantees it cannot be copied, which has never been possible or practical before.

On that note, is there anyone else doing what you are doing, or something similar?

The set of technologies we use is emerging and will be available to everyone in the coming years. We were very lucky to have most of the required software stack and talent in place even before the latest advances became available, so we could simply divert our engineering resources to the new project. It took us about three years to develop that software stack and expertise; it would take a competitor with unlimited funds a minimum of one to two years to reach the same level of functionality and security. Obviously, by the time they get there we hope to be light years ahead.

How expensive is it?

Our current production cost is under $2 per item, and we’re making millions of units now. Scaled to billions of units, it will be in the same ballpark as modern paper banknotes, which makes it a no-brainer for most governments to switch their legal tender to this technology in the future. One of our long-term goals is to extend the national blockchains that certain governments are developing to their physical currencies.

Finally, what’s the next goal for Tangem?

We’ve developed the technology to grow cryptocurrencies to the first billion people; now it’s also up to us to develop distribution and commercial partnerships to physically get this technology into the hands of billions of people around the world.

Original article here.

 

 



Content Types for Promoting Your Product, Service and Business

2018-01-26 - By 

Coming up with original ideas to feed the ever-increasing content demands of your readers, subscribers, prospects, customers and social fans is hard work! The lack of a single customer view was cited as the top barrier to successful cross-channel marketing in a recent Experian study, and it’s contributed to the skyrocketing volume of content we need to reach various audience segments where and when they prefer to consume content.

We also need to consider the devices on which consumers will access our content. Is it mobile-friendly? Is it visible and legible on small screens? Do you have longer form content for those who need more information to make a decision?

Of course, you then have to think of content formats — whether the message you’re trying to convey will come across best in written form, audio, video, visually, etc.

To that end, I found this awesome infographic of content formats to help marketers get inspired and climb out of the original-content-creation rut. There are 44 different content formats in this visual, which you can keep as a cheat sheet and refer to whenever you need ideas.

It’s also helpful for repurposing content. Make sure you’re getting the most mileage out of your content by repurposing it for your different audience segments. For example, that webinar you hosted can be repurposed into a summary blog post. The images from the PowerPoint you used in the webinar can become standalone graphics to share via social media. You can release just the audio portion of your webinar as a podcast, perhaps with supporting collateral like an e-book.

Check out this list of 44 content formats you can use to add flavor and variety to your content strategy:

 

Original article here.

 



Google’s Accelerated Mobile Pages (AMP)

2018-01-18 - By 

Starting AMP from scratch is great, but what if you already have an existing site? Learn how you can convert your site to AMP using AMP HTML.
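To give a flavor of what conversion involves, here is a toy Python sketch of two transformations the tutorial below walks through: injecting the required AMP runtime script into <head>, and replacing plain <img> tags with <amp-img> elements that carry explicit dimensions. (An illustration only, under the assumption of very simple markup; real pages should follow the linked tutorial and be checked with the official AMP validator rather than regexes.)

# Toy sketch of two HTML-to-AMP rewrites; not the official AMP tooling.
import re

def amplify(html: str) -> str:
    # Every AMP page must load the AMP runtime from the CDN in <head>.
    html = html.replace(
        "</head>",
        '<script async src="https://cdn.ampproject.org/v0.js"></script></head>',
    )
    # Plain <img> is disallowed; <amp-img> needs width, height and a layout.
    html = re.sub(
        r'<img src="([^"]+)"\s*/?>',
        r'<amp-img src="\1" width="600" height="400" layout="responsive"></amp-img>',
        html,
    )
    return html

page = '<html><head><title>Demo</title></head><body><img src="cat.jpg"></body></html>'
print(amplify(page))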

“What’s Allowed in AMP and What Isn’t”: https://goo.gl/ugMhHc

Tutorial on how to convert HTML to AMP: https://goo.gl/JwUVyG

Reach out with your AMP related questions: https://goo.gl/UxCWfz

Watch all Amplify episodes: https://goo.gl/B9CCl4

Subscribe to the AMP Channel and never miss an Amplify episode: https://goo.gl/g2Y8h7

 

 

Original video here.



IBM Fueling 2018 Cloud Growth With 1,900 Cloud Patents Plus Blazingly Fast AI-Optimized Chip

2018-01-17 - By 

CLOUD WARS — Investing in advanced technology to stay near the top of the savagely competitive enterprise-cloud market, IBM earned more than 1,900 cloud-technology patents in 2017 and has just released an AI-optimized chip said to have 10 times more IO and bandwidth than its nearest rival.

IBM is coming off a year in which it stunned many observers by establishing itself as one of the world’s top three enterprise-cloud providers—along with Microsoft and Amazon—by generating almost $16 billion in cloud revenue for the trailing 12 months ended Oct. 31, 2017.

While that $16-billion cloud figure pretty much matched the cloud-revenue figures for Microsoft and Amazon, many analysts and most media observers continue—for reasons I cannot fathom—to fail to acknowledge IBM’s stature as a broad-based enterprise-cloud powerhouse whose software capabilities position the company superbly for the next wave of cloud growth in hybrid cloud, PaaS, and SaaS.

And IBM, which announces its Q4 and annual earnings results on Thursday, Jan. 18, is displaying its full commitment to remaining among the top ranks of cloud vendors by earning almost 2,000 patents for cloud technologies in 2017, part of a companywide total of 9,043 patents received last year.

Noting that almost half of those 9,043 patents came from “pioneering advancements in AI, cloud computing, cybersecurity, blockchain and quantum computing,” IBM CEO Ginni Rometty said this latest round of advanced-technology innovation is “aimed at helping our clients create smarter businesses.”

In those cloud-related areas, IBM said its new patents include the following:

  • 1,400 AI patents, including one for an AI system that analyzes and can mirror a user’s speech patterns to make it easier for humans and AI to understand one another.
  • 1,200 cybersecurity patents, “including one for technology that enables AI systems to turn the table on hackers by baiting them into email exchanges and websites that expend their resources and frustrate their attacks.”
  • In machine learning, a system for autonomous vehicles that transfers control of the vehicle to humans “as needed, such as in an emergency.”
  • In blockchain, a method for reducing the number of steps needed to settle transactions among multiple business parties, “even those that are not trusted and might otherwise require a third-party clearinghouse to execute.”

For IBM, the pursuit of new cloud technologies is particularly important because a huge portion of its approximately $16 billion in cloud revenue comes from outside the standard cloud-revenue stream of IaaS, PaaS and SaaS and instead is generated by what I call IBM’s “cloud-conversion” business—an approach unique to IBM.

While IBM rather aridly defines that business as “hardware, software and services to enable IBM clients to implement comprehensive cloud solutions,” the concept comes alive when viewed through the perspective of what those offerings mean to big corporate customers. To understand how four big companies are tapping into IBM’s cloud conversion business, please check out my recent article called Inside IBM’s $7-Billion Cloud-Solutions Business: 4 Great Digital-Transformation Stories.

IBM’s most recent batch of cloud-technology patents (the company has now received more patents per year than any other U.S. company for 25 straight years) includes a patent that an IBM blog post describes this way: “a system that monitors data sources including weather reports, social networks, newsfeeds and network statistics to determine the best uses of cloud resources to meet demand. It’s one of the numerous examples of how using unstructured data can help organizations work more efficiently.”

That broad-based approach to researching and developing advanced technology also led to the launch last month of a microchip that IBM says is specifically optimized for artificial-intelligence workloads.

A TechCrunch article about IBM’s new Power9 chip said it will be used not only in the IBM Cloud but also the Google Cloud: “The company intends to sell the chips to third-party manufacturers and to cloud vendors including Google. Meanwhile, it’s releasing a new computer powered by the Power9 chip, the AC922 and it intends to offer the chips in a service on the IBM cloud.”

How does the new IBM chip stack up? The TechCrunch article offered this breathless endorsement of the Power9’s performance from analyst Patrick Moorhead of Moor Insights & Strategy: “Power9 is a chip which has a new systems architecture that is optimized for accelerators used in machine learning. Intel makes Xeon CPUs and Nervana accelerators and NVIDIA makes Tesla accelerators. IBM’s Power9 is literally the Swiss Army knife of ML acceleration as it supports an astronomical amount of IO and bandwidth, 10X of anything that’s out there today.”

It’s shaping up to be a very interesting year for IBM in the cloud, and I’ll be reporting later this week on Thursday’s earnings release.

As businesses jump to the cloud to accelerate innovation and engage more intimately with customers, my Cloud Wars series analyzes the major cloud vendors from the perspective of business customers.

 

Original article here.

 



Rethinking Gartner’s Hype Cycle

2018-01-12 - By 

The Gartner hype cycle is one of the more brilliant insights ever uncovered in the history of technology. I rank it right up there with Moore’s Law and Christensen’s model of disruptive innovation from below.

Gartner’s hype cycle describes a five-stage pattern that almost all new technologies follow:

  1. A technology trigger introduces new possibilities (things like AI, chatbots, AR/VR, blockchain, etc.) that capture the imagination and create a rapid rise in expectations. (“Big data is a breakthrough!”)
  2. The fervor quickly reaches a peak of inflated expectations: the “hype” is deafening and dramatically overshoots the reality of what’s possible. (“Big data will change everything!”)
  3. Reality soon sets in though, as people realize that the promises of that hype aren’t coming to fruition. Expectations drop like a rock, and the market slips into a trough of disillusionment. (“Big data isn’t that magical after all.”)
  4. But there is underlying value to the technology, and as it steadily improves, people begin to figure out realistic applications. This is the slope of enlightenment: expectations rise again, but less sharply, in alignment with what’s achievable. (“Big data is actually useful in these cases…”)
  5. Finally the expectations of the technology are absorbed into everyday life, with well-established best practices, leveling off in the plateau of productivity. (“Big data is an ordinary fact of life. Here’s how we use it.”)

It might not be a law of nature, but as a law of technology markets, it’s pretty consistent.

We hear a lot about the hype cycle in the martech world, because we have been inundated with new technologies in marketing. I’m covering a number of them in my 2018 update to the 5 disruptions to marketing: artificial intelligence (AI), conversational interfaces, augmented reality (AR), Internet of Things (IoT), customer data platforms (CDP), etc.

In marketing, it’s not just technologies that follow this hype cycle, but also concepts and tactics, such as content marketing, account-based marketing, revenue operations, and so on. By the way, that’s not a knock against any of those; there is real value in all of them. But the hype exceeds the reality in the first third or so of their lifecycle.

Indeed, it’s the reality underneath the hype cycle that people lose sight of. Expectations are perception. The actual advancement of the technology (or concept or tactic) is reality.

At the peak of inflated expectations, reality is far below what’s being discussed ad nauseam in blog posts and board rooms. In the trough of disillusionment, the actual, present-day potential is sadly underestimated, as discussions shift to the inflated expectations of the next new thing.

However, this mismatch between expectations and reality is a good thing, if you know what you’re doing. The gap between expectations and reality creates opportunities for a savvy company to manage to the reality while competitors chase the hype cycle.

It’s a variation of the age-old investment advice: buy low, sell high.

At the peak of inflated expectations, you want to avoid overspending on technology and overpromising results. You don’t want to ignore the movement entirely, since there is fire smoldering below the smoke. But you want to evaluate claims carefully, run things with an experimental mindset, and focus on real learning.

In the trough of disillusionment, that’s when you want to pour gas on the fire. Leverage what you learned from your experimental phase to scale up the things you know work, because you’ve proven them in your business.

Don’t be distracted by the backlash of negative chatter at this stage of the hype cycle. Reinvest your experimental efforts in pushing the possibilities ahead of the slope of enlightenment. This is your chance to race ahead of competitors who are pulling back from their missed results against earlier, unrealistic expectations.

As close as possible, you want to track the actual advancement of the technology. If you can achieve that, you’ll get two big wins, as the hype is on the way up and on the way down. You’ll harness the pendulum of the hype cycle into useful energy.

P.S. When I program the MarTech conference agenda, my goal is to give attendees as accurate a picture of the actual advancement of marketing technologies as possible.

I won’t try to sell you a ticket on overinflated expectations. But I will try to sell you a ticket on getting you the ground truth of marketing technology and innovation, so you can capture the two opportunities that are yours to take from the hype cycle.

Our next event is coming up, April 23-25 in San Jose. Our early bird rates expire on January 27, which saves you $500 on all-access passes. Take advantage of that pricing while you can.

 

Original article here.



Researchers implement 3-qubit Grover search on a quantum computer

2018-01-11 - By 

Searching large, unordered databases for a desired item is a time-consuming task for classical computers, but quantum computers are expected to perform these searches much more quickly. Previous research has shown that Grover’s search algorithm, proposed in 1996, is an optimal quantum search algorithm, meaning no other quantum algorithm can search faster. However, implementing Grover’s algorithm on a quantum system has been challenging.

Now, in a new study, researchers have implemented Grover’s algorithm with trapped atomic ions. The algorithm uses three qubits, which corresponds to a database of 8 (2³) items. When used to search the database for one or two items, the Grover algorithm’s success probabilities were, as expected, significantly higher than the best theoretical success probabilities for classical computers.

The researchers, Caroline Figgatt et al., at the University of Maryland and the National Science Foundation, have published a paper on their results in a recent issue of Nature Communications.

“This work is the first implementation of a 3-qubit Grover search algorithm in a scalable quantum computing system,” Figgatt told Phys.org. “Additionally, this is the first implementation of the algorithm using Boolean oracles, which can be directly compared with a classical search.”

The classical approach to searching a database is straightforward. Basically, the algorithm randomly guesses an item, or “solution.” So, for example, for a single search iteration on a database of 8 items, a classical algorithm makes one random query and, if that fails, it makes a second random guess—in total, guessing 2 out of 8 items, resulting in a 25% success rate.

Grover’s algorithm, on the other hand, first initializes the system in a quantum superposition of all 8 states, and then uses a quantum function called an oracle to mark the correct solution. As a result of these quantum strategies, for a single search iteration on an 8-item database, the theoretical success rate increases to 78%. With a higher success rate comes faster search times, as fewer queries are needed on average to arrive at the correct answer.
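Both quoted figures fall out of the standard Grover analysis and are easy to check. Here is a short Python sketch based on the textbook success-probability formula (my own illustration, not the paper’s code):

import math

N = 8  # database size for 3 qubits

# Classical: one query plus a second guess covers 2 of the 8 items.
classical = 2 / N
print(f"classical, 2 guesses: {classical:.1%}")  # 25.0%

# Grover with one marked item: after k iterations the success probability
# is sin^2((2k + 1) * theta), where sin(theta) = sqrt(M / N) for M solutions.
theta = math.asin(math.sqrt(1 / N))
print(f"Grover, 1 iteration:  {math.sin(3 * theta) ** 2:.1%}")  # 78.1%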

In the implementation of Grover’s algorithm reported here, the success rate was lower than the theoretical value—roughly 39% or 44%, depending on the oracle used—but still markedly higher than the classical success rate.

The researchers also tested Grover’s algorithm on databases that have two correct solutions, in which case the theoretical success rates are 47% and 100% for classical and quantum computers, respectively. The implementation demonstrated here achieved success rates of 68% and 75% for the two oracle types—again, better than the highest theoretical value for classical computers.
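The two-solution numbers can be reproduced the same way (again a sketch from the standard formulas, not the experiment itself): with two marked items, sin(θ) = √(2/8) = 1/2, so a single Grover iteration lands exactly on a solution, while a classical searcher making two guesses without replacement succeeds with probability 1 − (6/8)(5/7), about 47%.

import math

N, M = 8, 2  # 8 items, 2 marked solutions

# Classical: success = 1 - P(both guesses miss), drawing without replacement.
print(f"classical: {1 - (6 / 8) * (5 / 7):.1%}")  # 46.4%, quoted as ~47%

# Grover: theta = asin(sqrt(2 / 8)) = pi / 6, so sin^2(3 * theta) = 1.
theta = math.asin(math.sqrt(M / N))
print(f"Grover:    {math.sin(3 * theta) ** 2:.1%}")  # 100.0%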

The researchers expect that, in the future, this implementation of Grover’s algorithm can be scaled up to larger databases. As the size of the database increases, the quantum advantage over classical computers grows even larger, which is where future applications will benefit.

“Moving forward, we plan to continue developing systems with improved control over more qubits,” Figgatt said.

Original article here.

 



Spectre, Meltdown: Critical CPU Security Flaws Explained

2018-01-04 - By 

Over the past few days we’ve covered major new security risks that struck at a number of modern microprocessors from Intel and, to a much lesser extent, ARM and AMD. Information on the attacks and their workarounds initially leaked out slowly, but Google has pushed up its timeline for disclosing the problems and some vendors, like AMD, have issued their own statements. The two flaws in question are known as Spectre and Meltdown, and they both relate to one of the core capabilities of modern CPUs, known as speculative execution.

Speculative execution is a performance-enhancing technique that virtually all modern CPUs include to one degree or another. One way to increase CPU performance is to allow the core to perform calculations it may need in the future. The difference between speculative execution and ordinary execution is that the CPU performs these calculations before it knows whether it’ll actually be able to use the results.

Here’s how Google’s Project Zero summarizes the problem: “We have discovered that CPU data cache timing can be abused to efficiently leak information out of mis-speculated execution, leading to (at worst) arbitrary virtual memory read vulnerabilities across local security boundaries in various contexts.”

Meltdown is Variant 3 in ARM, AMD, and Google parlance. Spectre accounts for Variant 1 and Variant 2.

Meltdown

The formal Meltdown paper puts it bluntly: “On affected systems, Meltdown enables an adversary to read memory of other processes or virtual machines in the cloud without any permissions or privileges, affecting millions of customers and virtually every user of a personal computer.”

Intel is badly hit by Meltdown because its speculative execution methods are fairly aggressive. Specifically, Intel CPUs are allowed to access kernel memory when performing speculative execution, even when the application in question is running in user memory space. The CPU does check to see if an invalid memory access occurs, but it performs the check after speculative execution, not before. Architecturally, these invalid branches never execute — they’re blocked — but it’s possible to read data from affected cache blocks even so.

The various OS-level fixes going into macOS, Windows, and Linux all concern Meltdown. The formal PDF on Meltdown notes that the software patches Google, Apple, and Microsoft are working on are a good start, but that the problem can’t be completely fixed in software. AMD and ARM appear largely immune to Meltdown, though ARM’s upcoming Cortex-A75 is apparently impacted.

Spectre

Meltdown is bad, but Meltdown can at least be ameliorated in software (with updates), even if there’s an associated performance penalty. Spectre is the name given to a set of attacks that “involve inducing a victim to speculatively perform operations that would not occur during correct program execution, and which leak the victim’s confidential information via a side channel to the adversary.”

Unlike Meltdown, which impacts mostly Intel CPUs, Spectre’s proof of concept works against everyone, including ARM and AMD. Its attacks are pulled off differently — one variant targets branch prediction — and it’s not clear there are hardware solutions to this class of problems, for anyone.

What Happens Next

Intel, AMD, and ARM aren’t going to stop using speculative execution in their processors; it’s been key to some of the largest performance improvements we’ve seen in semiconductor history. But as Google’s extensive documentation makes clear, these proof-of-concept attacks are serious. Neither Spectre nor Meltdown relies on any kind of software bug to work. Meltdown can be solved through hardware design and software rearchitecting; Spectre may not be.

When reached for comment on the matter, Linux creator Linus Torvalds responded with the tact that’s made him legendary. “I think somebody inside of Intel needs to really take a long hard look at their CPU’s, and actually admit that they have issues instead of writing PR blurbs that say that everything works as designed,” Torvalds writes. “And that really means that all these mitigation patches should be written with ‘not all CPU’s are crap’ in mind. Or is Intel basically saying ‘We are committed to selling you shit forever and ever, and never fixing anything?’ Because if that’s the case, maybe we should start looking towards the ARM64 people more.”

It does appear, as of this writing, that Intel is disproportionately exposed on these security flaws. While Spectre-style attacks can affect all CPUs, Meltdown is pretty Intel-specific. Thus far, user applications and games don’t seem much impacted, but web servers and potentially other workloads that access kernel memory frequently could run markedly slower once patched.

 

Original article here.

 

