Posted On: April 2018 - AppFerret


F-Secure Hack Can Unlock Millions of Hotel Rooms With Handheld Device

2018-04-27

It’s very rare these days that a hotel will give you a real key when you check in. Instead, most chain hotels and mid-sized establishments have switched over to electronic locks with a keycard system. As researchers from F-Secure have discovered, these electronic locks may not be very secure. Researchers from the company have managed to create a “master key” for a popular brand of hotel locks that can unlock any door.

The team began this investigation more than a decade ago, when an F-Secure employee had a laptop stolen from a hotel room. Some of the staff began to wonder how easy it would be to hack the keycard locks, so they set out to do it themselves. The researchers are quick to point out that this was not a continuous focus at F-Secure for those 10 years; the work took several thousand man-hours in total, mostly in the last couple of years.

F-Secure settled on cracking the Vision by VingCard system built by Swedish lock manufacturer Assa Abloy. These locks are used in more than 42,000 properties in 166 countries. The project was a huge success, too. F-Secure reports they can create a master key in about a minute that unlocks any door in a hotel. That’s millions of potentially vulnerable hotel rooms around the world.

The hack involves a small handheld computer and an RFID reader (it also works with older magnetic stripe cards). All the researchers need to pull off the hack is a keycard from a hotel. It doesn’t even have to be an active one. Even old and invalid cards have the necessary data to reconstruct the keys that unlock doors. The custom software then generates a key with full privileges that can bypass all the locks in a building. Many hotels use these keys not only for guest rooms, but also elevators and employee-only areas of the hotel.

F-Secure disclosed the hack to Assa Abloy last year, and the lock maker developed a software patch to fix the issue. It’s available for customers to download now, but there’s one significant problem. The firmware on each lock needs an update, and there’s no guarantee every hotel with this system will have the resources to do that. Many of them might not even know the vulnerability exists. This hack could work for a long time to come, but F-Secure isn’t making the attack tools generally available. Anyone who wants to compromise these locks will have to start from scratch.

Original article here.

 



Serverless is eating the stack and people are freaking out — as they should be

2018-04-20

AWS Lambda has stamped a big DEPRECATED on containers.

Welcome to “Serverless Superheroes”! In this space, I chat with the toolmakers, innovators, and developers who are navigating the brave new world of “serverless” cloud applications.

In this edition, I chatted with Steven Faulkner, a senior software engineer at LinkedIn and the former director of engineering at Bustle. The following interview has been edited and condensed for clarity.

Forrest Brazeal: At Bustle, your previous company, I heard you cut your hosting costs by about forty percent when you switched to serverless. Can you speak to where all that money was going before, and how you were able to make that type of cost improvement?

Steven Faulkner: I believe 40% is where it landed. The initial results were even better than that. We had one service that was costing about $2500 a month and it went down to about $500 a month on Lambda.

Bustle is a media company — it’s got a lot of content, it’s got a lot of viral, spiky traffic — and so keeping up with that was not always the easiest thing. We took advantage of EC2 auto-scaling, and that worked … except when it didn’t. But when we moved to Lambda, not only did we save a lot of money (partly because Bustle’s traffic at night is basically half of what it is during the day), we also saw that serverless solved all these scaling headaches automatically.

On the flip side, did you find any unexpected cost increases with serverless?

There are definitely things that cost more on serverless, or that could be done more cheaply off of it. When I was at Bustle they were looking at some stuff around data pipelines and settled on not using serverless for that at all, because it would be way too expensive to go through Lambda.

Ultimately, although hosting cost was an interesting thing out of the gate for us, it quickly became a relative non-factor in our move to serverless. It was saving us money, and that was cool, but the draw of serverless really became more about the velocity with which our team could develop and deploy these applications.

At Bustle, we have only ever had one part-time “ops” person. With serverless, those responsibilities get diffused across our team, and that allowed us all to focus more on the application and less on how to get it deployed.

Any of us who’ve been doing serverless for a while know that the promise of “NoOps” may sound great, but the reality is that all systems need care and feeding, even ones you have little control over. How did your team keep your serverless applications running smoothly in production?

I am also not a fan of the term “NoOps”; it’s a misnomer and misleading for people. Definitely out of the gate with serverless, we spent time answering the question: “How do we know what’s going on inside this system?”

IOPipe was just getting off the ground at that time, and so we were one of their very first customers. We were using IOPipe to get some observability, then CloudWatch sort of got better, and X-Ray came into the picture which made things a little bit better still. Since then Bustle also built a bunch of tooling that takes all of the Lambda logs and data and does some transformations — scrubs it a little bit — and sends it to places like DataDog or to Scalyr for analysis, searching, metrics and reporting.

But I’m not gonna lie, I still don’t think it’s super great. It got to the point where it was workable and we could operate and not feel like we were always missing out on what was actually going on, but there’s a lot of room for improvement.
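
(For context, the general shape of that log pipeline is easy to sketch: a function subscribed to CloudWatch Logs receives compressed batches of log events, scrubs them, and forwards them on. The sketch below is a hedged illustration of that pattern, not Bustle’s actual tooling; the scrub rule is hypothetical and the forwarding step is omitted.)

```python
# Sketch of a CloudWatch Logs subscription processor: decode the batch,
# scrub sensitive strings, then forward. The scrub rule is hypothetical;
# forwarding to a service like DataDog or Scalyr is left as a comment.
import base64
import gzip
import json
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def handler(event, context):
    # CloudWatch Logs delivers subscription data as base64-encoded gzip JSON.
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )
    scrubbed = [EMAIL.sub("<redacted>", e["message"]) for e in payload["logEvents"]]
    # Forwarding via the vendor's HTTP ingestion API would happen here.
    return {"processed": len(scrubbed)}
```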

Another common serverless pain point is local development and debugging. How did you handle that?

I wrote a framework called Shep that Bustle still uses to deploy all of our production applications, and it handles the local development piece. It allows you to develop a NodeJS application locally and then deploy it to Lambda. It could do environment variables before Lambda had environment variables, and it brought some sanity around versioning and using webpack to bundle: all the stuff that you don’t really want the everyday developer to have to worry about.

I built Shep in my first couple of months at Bustle, and since then, the Serverless Framework has gotten better. SAM has gotten better. The entire ecosystem has leveled up. If I was doing it today I probably wouldn’t need to write Shep. But at the time, that’s definitely what we had to do.
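
(To make that concrete, here is a minimal sketch of the kind of function a deploy tool like Shep manages: a Lambda handler that reads its configuration from environment variables. It is shown in Python rather than NodeJS purely for consistency with the other sketches on this page, and the variable names are hypothetical.)

```python
# Minimal sketch of a Lambda handler configured via environment variables.
# The variable names are hypothetical, for illustration only.
import json
import os

def handler(event, context):
    # Environment variables let the same code run in dev, staging, and
    # production without redeploying with different hard-coded values.
    stage = os.environ.get("STAGE", "dev")
    table_name = os.environ.get("POSTS_TABLE", "posts-dev")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"stage": stage, "table": table_name}),
    }
```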

You’re putting your finger on an interesting reality with the serverless space, which is: it’s evolving so fast that it’s easy to create a lot of tooling and glue code that becomes obsolete very quickly. Did you find this to be true?

That’s extremely fair to say. I had a little Twitter thread around this a couple months ago, having a bit of a realization myself that Shep is not the way I would do deployments anymore. When AWS releases their own tooling, it always seems to start out pretty bad, so the temptation is to fill in those gaps with your own tool.

But AWS services change and get better at a very rapid rate. So I think the lesson I learned is lean on AWS as much as possible, or build on top of their foundation and make it pluggable in a way that you can just revert to the AWS tooling when it gets better.

Honestly, I don’t envy a lot of the people who sliced their piece of the serverless pie based on some tool they’ve built. I don’t think that’s necessarily a long term sustainable thing.

As I talk to developers and sysadmins, I feel like I encounter a lot of rage about serverless as a concept. People always want to tell me the three reasons why it would never work for them. Why do you think this concept inspires so much animosity and how do you try to change hearts and minds on this?

A big part of it is that we are deprecating so many things at one time. It does feel like a very big step to me compared to something like containers. Kelsey Hightower said something like this at one point: containers enable you to take the existing paradigm and move it forward, whereas serverless is an entirely new paradigm.

And so all these things that people have invented and invested time and money and resources in are just going away, and that’s traumatic, that’s painful. It won’t happen overnight, but anytime you make something that makes people feel like what they’ve maybe spent the last 10 years doing is obsolete, it’s hard. I don’t really know if I have a good way to fix that.

My goal with serverless was building things faster. I’m a product developer; that’s my background, that’s what I like to do. I want to make cool things happen in the world, and serverless allows me to do that better and faster than I can otherwise. So when somebody comes to me and says “I’m upset that this old way of doing things is going away”, it’s hard for me to sympathize.

It sounds like you’re making the point that serverless as a movement is more about business value than it is about technology.

Exactly! But the world is a big tent and there’s room for all kinds of stuff. I see this movement around OpenFaaS and the various Functions as a Service on Kubernetes and I don’t have a particular use for those things, but I can see businesses where they do, and if it helps get people transitioned over to serverless, that’s great.

So what is your definition of serverless, then?

I always joke that “cloud native” would have been a much better term for serverless, but unfortunately that was already taken. I think serverless is really about the managed services. Like, who is responsible for owning whether this thing that my application depends on stays up or not? And functions as a service is just a small piece of that.

The way I describe it is: functions as a service are cloud glue. So if I’m building a model airplane, well, the glue is a necessary part of that process, but it’s not the important part. Nobody looks at your model airplane and says: “Wow, that’s amazing glue you have there.” It’s all about how you craft something that works with all these parts together, and FaaS enables that.

And, as Joe Emison has pointed out, you’re not just limited to one cloud provider’s services, either. I’m a big user of Algolia with AWS. I love using Algolia with Firebase, or Netlify. Serverless is about taking these pieces and gluing them together. Then it’s up to the service provider to really just do their job well. And over time hopefully the providers are doing more and more.

We’re seeing that serverless mindset eat all of these different parts of the stack. Functions as a service was really a critical bit in order to accelerate the process. The next big piece is the database. We’re gonna see a lot of innovation there in the next year. FaunaDB is doing some cool stuff in that area, as is CosmosDB. I believe there is also a missing piece of the market for a Redis-style serverless offering, something that maybe even speaks Redis commands but under the hood is automatically distributed and scalable.

What is a legitimate barrier to companies that are looking to adopt serverless at this point?

Probably the biggest is: how do you deal with the migration of legacy things? At Bustle we ended up mostly re-architecting our entire platform around serverless, and so that’s one option, but certainly not available to everybody. But even then, the first time we launched a serverless service, we brought down all of our Redis instances — because Lambda spun up all these containers and we hit connection limits that you would never expect to hit in a normal app.

So if you’ve got something sitting on a mainframe somewhere that is used to having only 20 connections, and you move an upstream service over to Lambda so that it suddenly has 10,000 connections instead of 20, you’ve got a problem. If you’ve bought into service-oriented architecture as a whole over the last four or five years, then you might have a better time, because you can say “Well, all these things do is talk to each other via an API, so we can replace a single service with serverless functions.”
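
That connection-exhaustion problem has a well-known mitigation: open the connection once per container, at module scope, so that warm invocations reuse it instead of reconnecting on every request. A minimal sketch, assuming the redis-py client and a hypothetical host name:

```python
# Sketch: one connection per Lambda container instead of one per invocation.
# Module-level code runs once per container (at cold start); warm invocations
# reuse the same client, so total connections are capped at roughly the
# number of concurrent containers rather than the number of requests.
import os
import redis  # assumes the redis-py package is bundled with the function

client = redis.Redis(
    host=os.environ.get("REDIS_HOST", "cache.example.internal"),  # hypothetical
    port=6379,
    socket_connect_timeout=2,
)

def handler(event, context):
    views = client.incr("views:" + event.get("path", "/"))
    return {"statusCode": 200, "body": str(views)}
```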

Any other emerging serverless trends that interest you?

We’ve solved a lot of the easy, low-hanging fruit problems with serverless at this point, like how you do environment variables, or how you’re gonna structure a repository and enable developers to quickly write these functions. We’re starting to establish some really good best practices.

What’ll happen next is we’ll get more iteration around architecture. How do I glue these four services together, and how do the Lambda functions look that connect them? We don’t yet have the Rails of serverless — something that doesn’t necessarily expose that it’s actually a Lambda function under the hood. Maybe it allows you to write a bunch of functions in one file that all talk to each other, and then use something like webpack that splits those functions and deploys them in a way that makes sense for your application.

We could even respond to that at runtime. You could have an application that’s actually looking at what’s happening in the code and saying: “Wow this one part of your code is taking a long time to run; we should make that its own Lambda function and we should automatically deploy that and set up this SNS trigger for you.” That’s all very pie in the sky, but I think we’re not that far off from having these tools.

Because really, at the end of the day, as a developer I don’t care about Lambda, right? I mean, I have to care right now because it’s the layer in which I work, but if I can move one layer up where I’m just writing business logic and the code gets split up appropriately, that’s real magic.


Forrest Brazeal is a cloud architect and serverless community advocate at Trek10. He writes the Serverless Superheroes series and draws the ‘FaaS and Furious’ cartoon series at A Cloud Guru. If you have a serverless story to tell, please don’t hesitate to let him know.

Original article here.

 



The CIA Just Lost Control of Its Hacking Arsenal. Here’s What You Need to Know.

2018-04-09

WikiLeaks just released internal documentation of the CIA’s massive arsenal of hacking tools and techniques. These 8,761 documents — called “Vault 7” — show how its operatives can remotely monitor and control devices, such as phones, TVs, and cars.

And what’s worse, this archive of techniques seems to be out in the open, where all manner of hackers can use it to attack us.

“The CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized ‘zero day’ exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA.” — WikiLeaks

WikiLeaks has chosen not to publish the malicious code itself “until a consensus emerges on… how such ‘weapons’ should be analyzed, disarmed and published.”

But this has laid bare just how many people are aware of these devastating hacking techniques.

“This archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.” — WikiLeaks

Disturbingly, these hacks were bought or stolen from other countries’ intelligence agencies, and instead of closing these vulnerabilities, the government put everyone at risk by intentionally keeping them open.

“[These policy decisions] urgently need to be debated in public, including whether the CIA’s hacking capabilities exceed its mandated powers and the problem of public oversight of the agency.” — the operative who leaked the data

First, I’m going to break down three takeaways from today’s Vault 7 release that every American citizen should be aware of. Then I’ll give you actionable advice for how you can protect yourself from this illegal overreach by the US government — and from the malicious hackers the government has empowered through its own recklessness.

Takeaway #1: If you drive an internet-connected car, hackers can crash it into a concrete wall and kill you and your family.

I know, this sounds crazy, but it’s real.

“As of October 2014 the CIA was also looking at infecting the vehicle control systems used by modern cars and trucks. The purpose of such control is not specified, but it would permit the CIA to engage in nearly undetectable assassinations.” — WikiLeaks

We’ve known for a while that internet-connected cars could be hacked. But we had no idea of the scope of this until today.

Like other software companies, car manufacturers constantly patch vulnerabilities as they discover them. So if you have an internet-connected car, always update to the latest version of its software.

As Wikileaks makes more of these vulnerabilities public, car companies should be able to quickly patch them and release security updates.

Takeaway #2: It doesn’t matter how secure an app is — if the operating system it runs on gets hacked, the app is no longer secure.

Since the CIA (and probably lots of other organizations, now) know how to compromise your iOS and Android devices, they can intercept data before it even reaches the app. This means they can grab your unencrypted input (microphone, keystrokes) before Signal or WhatsApp can encrypt it.

One important way to reduce the impact of these exploits is to open source as much of this software as possible.

“Proprietary software tends to have malicious features. The point is with a proprietary program, when the users don’t have the source code, we can never tell. So you must consider every proprietary program as potential malware.” — Richard Stallman, founder of the GNU Project

You may be thinking — isn’t Android open source? Its core is open source, but Google and handset manufacturers like Samsung are increasingly adding closed-source code on top of this. In doing so, they’re opening themselves up to more ways of getting hacked. When code is closed source, there’s not much the developer community can do to help them.

“There are two types of companies: those who have been hacked, and those who don’t yet know they have been hacked.” — John Chambers, former CEO of Cisco

By open-sourcing more of the code, the developer community will be able to discover and patch these vulnerabilities much faster.

Takeaway #3: Just because a device looks like it’s turned off doesn’t mean it’s really turned off.

One of the most disturbing exploits involves making smart TVs look like they’re turned off, but actually leaving their microphones on. People all around the world are literally bugging their own homes with these TVs.

The “fake-off” mode is part of the “Weeping Angel” exploit:

“The attack against Samsung smart TVs was developed in cooperation with the United Kingdom’s MI5/BTSS. After infestation, Weeping Angel places the target TV in a ‘Fake-Off’ mode, so that the owner falsely believes the TV is off when it is on. In ‘Fake-Off’ mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server.” — Vault 7 documents

The leaked CIA documentation shows how hackers can turn off LEDs to make a device look like it’s off.

You know that light that turns on whenever your webcam is recording? That can be turned off, too. Even the director of the FBI — the same official who recently paid hackers a million dollars to unlock a shooter’s iPhone — is encouraging everyone to cover their webcams.

Just like how you should always treat a gun as though it were loaded, you should always treat a microphone as though it were recording.

What can you do about all this?

It’s not clear how badly all of these devices are compromised. Hopefully Apple, Google, and other companies will quickly patch these vulnerabilities as they are made public.

There will always be new vulnerabilities. No software application will ever be completely secure. We must continue to be vigilant.

Here’s what you should do:

  1. Don’t despair. You should still do everything you can to protect yourself and your family.
  2. Educate yourself on cybersecurity and cyberwarfare. This is the best book on the topic.
  3. Take a moment to read my guide on how to encrypt your entire life in less than an hour.

Thanks for reading. And a special thanks to Steve Phillips for helping review and fact-check this article.

Original article here.

 



The Difference Between Artificial Intelligence, Machine Learning, and Deep Learning

2018-04-08

Simple explanations of Artificial Intelligence, Machine Learning, and Deep Learning and how they’re all different. Plus, how AI and IoT are inextricably connected.

We’re all familiar with the term “Artificial Intelligence.” After all, it’s been a popular focus in movies such as The Terminator, The Matrix, and Ex Machina (a personal favorite of mine). But you may have recently been hearing about other terms like “Machine Learning” and “Deep Learning,” sometimes used interchangeably with artificial intelligence. As a result, the difference between artificial intelligence, machine learning, and deep learning can be very unclear.

I’ll begin by giving a quick explanation of what Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) actually mean and how they’re different. Then, I’ll share how AI and the Internet of Things are inextricably intertwined, with several technological advances all converging at once to set the foundation for an AI and IoT explosion.

So what’s the difference between AI, ML, and DL?

First coined by John McCarthy in 1956, the term refers to machines that can perform tasks characteristic of human intelligence. While this is rather general, it includes things like planning, understanding language, recognizing objects and sounds, learning, and problem solving.

We can put AI in two categories, general and narrow. General AI would have all of the characteristics of human intelligence, including the capacities mentioned above. Narrow AI exhibits some facet(s) of human intelligence, and can do that facet extremely well, but is lacking in other areas. A machine that’s great at recognizing images, but nothing else, would be an example of narrow AI.

At its core, machine learning is simply a way of achieving AI.

Arthur Samuel coined the phrase not too long after AI, in 1959, defining it as “the ability to learn without being explicitly programmed.” You see, you can get AI without using machine learning, but this would require building millions of lines of code with complex rules and decision trees.

So instead of hard-coding software routines with specific instructions to accomplish a particular task, machine learning is a way of “training” an algorithm so that it can learn how. “Training” involves feeding huge amounts of data to the algorithm and allowing the algorithm to adjust itself and improve.

To give an example, machine learning has been used to make drastic improvements to computer vision (the ability of a machine to recognize an object in an image or video). You gather hundreds of thousands or even millions of pictures and then have humans tag them. For example, the humans might tag pictures that have a cat in them versus those that do not. Then, the algorithm tries to build a model that can tag a picture as containing a cat or not as accurately as a human can. Once the accuracy level is high enough, the machine has “learned” what a cat looks like.
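
As a toy version of that workflow, the sketch below trains a classifier on labeled feature vectors standing in for tagged pictures. Real computer vision works on pixel data with far larger models; scikit-learn and the synthetic data here are assumptions made purely for illustration.

```python
# Toy version of the "tag pictures, then train" workflow described above.
# Random feature vectors stand in for images so the example stays
# self-contained; real computer vision uses raw pixels and larger models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))          # 1,000 "images", 64 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # 1 = "cat", 0 = "no cat" (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)  # "training": the model adjusts itself to the labels

# Once accuracy on unseen data is high enough, the model has "learned"
# the labeling the humans provided.
print("accuracy:", model.score(X_test, y_test))
```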

Deep learning is one of many approaches to machine learning. Other approaches include decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others.

Deep learning was inspired by the structure and function of the brain, namely the interconnecting of many neurons. Artificial Neural Networks (ANNs) are algorithms that mimic the biological structure of the brain.

In ANNs, there are “neurons” which have discrete layers and connections to other “neurons”. Each layer picks out a specific feature to learn, such as curves or edges in image recognition. It’s this layering that gives deep learning its name: depth is created by using multiple layers, as opposed to a single layer.
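
As a concrete illustration, in the Keras library such a network is declared as a literal stack of layers, each one feeding the next. The layer sizes below are arbitrary choices for this sketch, matching the 64-feature toy data from the earlier example:

```python
# A small artificial neural network as a stack of layers (Keras, bundled
# with TensorFlow). Each Dense layer learns features of the layer below it;
# stacking multiple layers is the "depth" in deep learning.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(64,)),  # hidden layer 1
    keras.layers.Dense(16, activation="relu"),                     # hidden layer 2
    keras.layers.Dense(1, activation="sigmoid"),                   # output: cat probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```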

AI and IoT are Inextricably Intertwined

I think of the relationship between AI and IoT much like the relationship between the human brain and body.

Our bodies collect sensory input such as sight, sound, and touch. Our brains take that data and make sense of it, turning light into recognizable objects and turning sounds into understandable speech. Our brains then make decisions, sending signals back out to the body to command movements like picking up an object or speaking.

All of the connected sensors that make up the Internet of Things are like our bodies: they provide the raw data about what’s going on in the world. Artificial intelligence is like our brain: it makes sense of that data and decides what actions to perform. And the connected devices of IoT are again like our bodies, carrying out physical actions or communicating to others.

Unleashing Each Other’s Potential

The value and the promises of both AI and IoT are being realized because of the other.

Machine learning and deep learning have led to huge leaps for AI in recent years. As mentioned above, machine learning and deep learning require massive amounts of data to work, and this data is being collected by the billions of sensors that are continuing to come online in the Internet of Things. IoT makes better AI.

Improving AI will also drive adoption of the Internet of Things, creating a virtuous cycle in which both areas will accelerate drastically. That’s because AI makes IoT useful.

On the industrial side, AI can be applied to predict when machines will need maintenance or analyze manufacturing processes to make big efficiency gains, saving millions of dollars.

On the consumer side, rather than having to adapt to technology, technology can adapt to us. Instead of clicking, typing, and searching, we can simply ask a machine for what we need. We might ask for information like the weather or for an action like preparing the house for bedtime (turning down the thermostat, locking the doors, turning off the lights, etc.).

Converging Technological Advancements Have Made this Possible

Shrinking computer chips and improved manufacturing techniques mean cheaper, more powerful sensors.

Quickly improving battery technology means those sensors can last for years without needing to be connected to a power source.

Wireless connectivity, driven by the advent of smartphones, means that data can be sent in high volume at cheap rates, allowing all those sensors to send data to the cloud.

And the birth of the cloud has allowed for virtually unlimited storage of that data and virtually infinite computational ability to process it.

Of course, there are one or two concerns about the impact of AI on our society and our future. But as advancements and adoption of both AI and IoT continue to accelerate, one thing is certain: the impact is going to be profound.

 

Original article here.

