Video Archives - AppFerret


AI in 2019 (video)

2019-01-03

2018 has been an eventful year for AI, to say the least! We’ve seen advances in generative models, DeepMind’s AlphaZero results, several data breach scandals, and so much more. I’m going to briefly review AI in 2018 before giving 10 predictions on where the space is going in 2019. Prepare yourself: my predictions range from more Kubernetes-infused ML pipelines to the first business use case for generative modeling of 3D worlds. Happy New Year and enjoy!

Original video here.



Intel’s New Quantum Computing Breakthrough Using Silicon

2018-02-20

Silicon has been an integral part of computing for decades. While it’s not always the best solution by every metric, it’s captured a key set of capabilities that make it well suited for general (or classical) computing. When it comes to quantum computing, however, silicon-based solutions haven’t really been adopted.

Historically, silicon qubits have been shunned for two reasons: It’s difficult to control qubits manufactured on silicon, and it’s never been clear if silicon qubits could scale as well as other solutions. D-Wave’s quantum annealer is up to 2,048 qubits, and recently added a reverse quantum annealing capability, while IBM demonstrated a 50-qubit quantum computer last month. Now Intel is throwing its own hat into the ring with a new type of qubit known as a “spin qubit,” produced on conventional silicon.

Note: This is a fundamentally different technology than the quantum computing research Intel unveiled earlier this year. The company is proceeding along parallel tracks, developing a more standard quantum computer alongside its own silicon-based efforts.

Here’s how Intel describes this technology:

Spin qubits highly resemble the semiconductor electronics and transistors as we know them today. They deliver their quantum power by leveraging the spin of a single electron on a silicon device and controlling the movement with tiny, microwave pulses.

The company has published an informative video about the technology, available below:

As for why Intel is pursuing spin qubits as opposed to the approach IBM has taken, there are several reasons. First and foremost, Intel is heavily invested in the silicon industry — far more so than any other firm working on quantum computing. IBM sold its fabs to GlobalFoundries. No one, to the best of our knowledge, is building quantum computers at pure-play foundries like TSMC. Intel’s expertise is in silicon and the company is still one of the foremost foundries in the world.

But beyond that, there are benefits to silicon qubits. Silicon qubits are smaller than conventional qubits, and they are expected to hold coherence for a longer period of time. This could be critical to efforts to scale quantum computing systems upwards. While its initial test chips operate at 20 millikelvin, Intel believes it can scale its design up to an operating temperature of 1 kelvin. That gap might not seem like much, but Intel claims it’s critical to long-term qubit scaling. Moving up to 1K reduces the amount of cooling equipment that must be packed between each qubit, and allows more qubits to be packed into a smaller space.

Intel is already moving towards having a functional spin qubit system. The company has prototyped a “spin qubit fabrication flow on its 300 mm process technology,” using isotopically pure wafers sourced for producing spin-qubit test chips:

Fabricated in the same facility as Intel’s advanced transistor technologies, Intel is now testing the initial wafers. Within a couple of months, Intel expects to be producing many wafers per week, each with thousands of small qubit arrays.

If silicon spin qubits can be built in large quantities, and the benefits Intel expects materialize, it could be a game-changing event for quantum computing. Building these chips in bulk and packing qubits more tightly together could make it possible to scale up qubit production relatively quickly.

Original article here.

Google’s Accelerated Mobile Pages (AMP)

2018-01-18

Starting AMP from scratch is great, but what if you already have an existing site? Learn how you can convert your site to AMP using AMP HTML.

“What’s Allowed in AMP and What Isn’t”: https://goo.gl/ugMhHc

Tutorial on how to convert HTML to AMP: https://goo.gl/JwUVyG

Reach out with your AMP related questions: https://goo.gl/UxCWfz

Watch all Amplify episodes: https://goo.gl/B9CCl4

Subscribe to The AMP Channel and never miss an Amplify episode: https://goo.gl/g2Y8h7

Original video here.



AI-Generated Celebrity Faces Look Real (video)

2017-10-31

Researchers from NVIDIA published work using artificial intelligence algorithms, or more specifically generative adversarial networks, to produce celebrity faces in high detail. Watch the results below.

Original article here. Research PDF here.

Bluetooth White Paper Identified 8 Vulnerabilities (video)

2017-09-13

Ask two researchers what the problem with Bluetooth is, and they will give you a simple answer.

“Bluetooth is complicated. Too complicated. Too many specific applications are defined in the stack layer, with endless replication of facilities and features.” Case in point: the WiFi specification (802.11) is only 450 pages long, they said, while the Bluetooth specification reaches 2822 pages.

Unfortunately, they added, the complexity has “kept researchers from auditing its implementations at the same level of scrutiny that other highly exposed protocols, and outwards-facing interfaces have been treated with.”

That lack of scrutiny means vulnerabilities can go unidentified.

And that is a fitting segue to this week’s news about devices with Bluetooth capabilities.

At Armis Labs, Ben Seri and Gregory Vishnepolsky are the two researchers who discovered the vulnerabilities in modern Bluetooth stacks; devices with Bluetooth capabilities are estimated at over 8.2 billion, according to the Armis site’s overview.

Seri and Vishnepolsky are the authors of a 42-page white paper detailing what is wrong and at stake in their findings. The discovery is being described as an “attack vector endangering major mobile, desktop, and IoT operating systems, including Android, iOS, Windows, and Linux, and the devices using them.”

They are calling the vector BlueBorne, as it spreads through the air and attacks devices via Bluetooth. Attackers can hack into cellphones and computers simply because they have Bluetooth on. “Just by having Bluetooth on, we can get malicious code on your device,” Nadir Izrael, CTO and cofounder of security firm Armis, told Ars Technica.

Let’s ponder this, as it highlights a troubling aspect of the attack. As Lorenzo Franceschi-Bicchierai reported at Motherboard:

“‘The user is not involved in the process, they don’t need to be in discoverable mode, they don’t have to have a Bluetooth connection active, just have Bluetooth on,’ Nadir Izrael, the co-founder and chief technology officer for Armis, told Motherboard.”

Their white paper identifies eight vulnerabilities. (The authors thank Alon Livne for developing the Linux RCE exploit.)

Original article here.

AI detectives are cracking open the black box of deep learning (video)

2017-08-30

Jason Yosinski sits in a small glass box at Uber’s San Francisco, California, headquarters, pondering the mind of an artificial intelligence. An Uber research scientist, Yosinski is performing a kind of brain surgery on the AI running on his laptop. Like many of the AIs that will soon be powering so much of modern life, including self-driving Uber cars, Yosinski’s program is a deep neural network, with an architecture loosely inspired by the brain. And like the brain, the program is hard to understand from the outside: It’s a black box.

This particular AI has been trained, using a vast sum of labeled images, to recognize objects as random as zebras, fire trucks, and seat belts. Could it recognize Yosinski and the reporter hovering in front of the webcam? Yosinski zooms in on one of the AI’s individual computational nodes—the neurons, so to speak—to see what is prompting its response. Two ghostly white ovals pop up and float on the screen. This neuron, it seems, has learned to detect the outlines of faces. “This responds to your face and my face,” he says. “It responds to different size faces, different color faces.”

No one trained this network to identify faces. Humans weren’t labeled in its training images. Yet learn faces it did, perhaps as a way to recognize the things that tend to accompany them, such as ties and cowboy hats. The network is too complex for humans to comprehend its exact decisions. Yosinski’s probe had illuminated one small part of it, but overall, it remained opaque. “We build amazing models,” he says. “But we don’t quite understand them. And every year, this gap is going to get a bit larger.”
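Probes like the one Yosinski ran can be surprisingly simple. One classic trick is an occlusion probe, sketched below under the assumption that you can read a single neuron’s activation out of the network (the `activation_fn` hook and patch size are illustrative stand-ins, not Yosinski’s actual toolbox): slide a blanked-out patch across the image and watch how the neuron’s response changes.

```python
import numpy as np

def neuron_response_map(image, activation_fn, patch=8):
    """Map which parts of an image drive one neuron, by occlusion.

    `activation_fn(img)` returns the scalar activation of the neuron under
    study (a stand-in for hooking a real network's internal node).
    """
    base = activation_fn(image)
    h, w = image.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # gray out one patch
            # A large activation drop means the neuron was "watching" here.
            heat[i // patch, j // patch] = base - activation_fn(occluded)
    return heat
```

A face-detecting neuron probed this way would light up the heat map wherever the ghostly white ovals appear on Yosinski’s screen.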

This video provides a high-level overview of the problem:

Each month, it seems, deep neural networks, or deep learning, as the field is also called, spread to another scientific discipline. They can predict the best way to synthesize organic molecules. They can detect genes related to autism risk. They are even changing how science itself is conducted. The AIs often succeed in what they do. But they have left scientists, whose very enterprise is founded on explanation, with a nagging question: Why, model, why?

That interpretability problem, as it’s known, is galvanizing a new generation of researchers in both industry and academia. Just as the microscope revealed the cell, these researchers are crafting tools that will allow insight into how neural networks make decisions. Some tools probe the AI without penetrating it; some are alternative algorithms that can compete with neural nets, but with more transparency; and some use still more deep learning to get inside the black box. Taken together, they add up to a new discipline. Yosinski calls it “AI neuroscience.”

Opening up the black box

Loosely modeled after the brain, deep neural networks are spurring innovation across science. But the mechanics of the models are mysterious: They are black boxes. Scientists are now developing tools to get inside the mind of the machine.

Marco Ribeiro, a graduate student at the University of Washington in Seattle, strives to understand the black box by using a class of AI neuroscience tools called counterfactual probes. The idea is to vary the inputs to the AI—be they text, images, or anything else—in clever ways to see which changes affect the output, and how. Take a neural network that, for example, ingests the words of movie reviews and flags those that are positive. Ribeiro’s program, called Local Interpretable Model-Agnostic Explanations (LIME), would take a review flagged as positive and create subtle variations by deleting or replacing words. Those variants would then be run through the black box to see whether it still considered them to be positive. On the basis of thousands of tests, LIME can identify the words—or parts of an image or molecular structure, or any other kind of data—most important in the AI’s original judgment. The tests might reveal that the word “horrible” was vital to a panning or that “Daniel Day-Lewis” led to a positive review. But although LIME can diagnose those singular examples, that result says little about the network’s overall insight.
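To make the idea concrete, here is a minimal sketch of a LIME-style probe in Python. It assumes you supply the black box as a `predict_fn` mapping a list of texts to positive-class scores; the function names and weighting scheme are simplified illustrations, not the real LIME package’s API.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_style_explain(text, predict_fn, num_samples=1000, num_features=6, seed=0):
    """Rank the words that most influence predict_fn's output for `text`."""
    rng = np.random.default_rng(seed)
    words = text.split()

    # Random binary masks: 1 keeps a word, 0 deletes it.
    masks = rng.integers(0, 2, size=(num_samples, len(words)))
    masks[0, :] = 1  # keep one unperturbed copy of the review
    variants = [" ".join(w for w, m in zip(words, row) if m) for row in masks]
    scores = np.asarray(predict_fn(variants))

    # Weight samples by closeness to the original (fraction of words kept),
    # then fit a simple, interpretable linear surrogate to the black box.
    surrogate = Ridge(alpha=1.0).fit(masks, scores, sample_weight=masks.mean(axis=1))

    # The largest-magnitude coefficients mark the most influential words.
    top = np.argsort(-np.abs(surrogate.coef_))[:num_features]
    return [(words[i], float(surrogate.coef_[i])) for i in top]

# Toy black box that pans any review containing "horrible":
toy = lambda texts: [0.9 - 0.8 * ("horrible" in t) for t in texts]
print(lime_style_explain("the plot was horrible but the acting was lovely", toy))
```

Run against a real sentiment model, the word “horrible” would surface with a large negative coefficient, exactly the kind of diagnosis the article describes.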

New counterfactual methods like LIME seem to emerge each month. But Mukund Sundararajan, a computer scientist at Google, devised a probe that doesn’t require testing the network a thousand times over: a boon if you’re trying to understand many decisions, not just a few. Instead of varying the input randomly, Sundararajan and his team introduce a blank reference—a black image or a zeroed-out array in place of text—and transition it step-by-step toward the example being tested. Running each step through the network, they watch the jumps it makes in certainty, and from that trajectory they infer features important to a prediction.

Sundararajan compares the process to picking out the key features that identify the glass-walled space he is sitting in—outfitted with the standard medley of mugs, tables, chairs, and computers—as a Google conference room. “I can give a zillion reasons.” But say you slowly dim the lights. “When the lights become very dim, only the biggest reasons stand out.” Those transitions from a blank reference allow Sundararajan to capture more of the network’s decisions than Ribeiro’s variations do. But deeper, unanswered questions are always there, Sundararajan says—a state of mind familiar to him as a parent. “I have a 4-year-old who continually reminds me of the infinite regress of ‘Why?’”
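Sundararajan and colleagues published this technique as “integrated gradients,” and its core is short enough to sketch. The version below assumes you can supply `grad_fn`, the gradient of the model’s output with respect to its input (in practice, from an autodiff framework); everything else is an illustrative simplification.

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    """Attribute a model's prediction to input features."""
    # Transition step-by-step from the blank reference toward the example.
    alphas = np.linspace(0.0, 1.0, steps + 1)[1:]
    path = [baseline + a * (x - baseline) for a in alphas]

    # Average the gradients along the path (a Riemann sum of the path
    # integral), then scale by each feature's total change.
    avg_grad = np.mean([grad_fn(p) for p in path], axis=0)
    return (x - baseline) * avg_grad

# Toy model f(x) = sum(x**2), whose gradient is 2x. The attributions sum
# to roughly f(x) - f(baseline), the method's "completeness" property.
x, base = np.array([1.0, 2.0]), np.zeros(2)
print(integrated_gradients(x, base, grad_fn=lambda p: 2 * p))  # ~[1.0, 4.1]
```

Because one pass along the path suffices, the method scales to explaining many decisions, which is the advantage the article highlights over random perturbation.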

The urgency comes not just from science. According to a directive from the European Union, companies deploying algorithms that substantially influence the public must by next year create “explanations” for their models’ internal logic. The Defense Advanced Research Projects Agency, the U.S. military’s blue-sky research arm, is pouring $70 million into a new program, called Explainable AI, for interpreting the deep learning that powers drones and intelligence-mining operations. The drive to open the black box of AI is also coming from Silicon Valley itself, says Maya Gupta, a machine-learning researcher at Google in Mountain View, California. When she joined Google in 2012 and asked AI engineers about their problems, accuracy wasn’t the only thing on their minds, she says. “I’m not sure what it’s doing,” they told her. “I’m not sure I can trust it.”

Rich Caruana, a computer scientist at Microsoft Research in Redmond, Washington, knows that lack of trust firsthand. As a graduate student in the 1990s at Carnegie Mellon University in Pittsburgh, Pennsylvania, he joined a team trying to see whether machine learning could guide the treatment of pneumonia patients. In general, sending the hale and hearty home is best, so they can avoid picking up other infections in the hospital. But some patients, especially those with complicating factors such as asthma, should be admitted immediately. Caruana applied a neural network to a data set of symptoms and outcomes provided by 78 hospitals. It seemed to work well. But disturbingly, he saw that a simpler, transparent model trained on the same records suggested sending asthmatic patients home, indicating some flaw in the data. And he had no easy way of knowing whether his neural net had picked up the same bad lesson. “Fear of a neural net is completely justified,” he says. “What really terrifies me is what else did the neural net learn that’s equally wrong?”

Today’s neural nets are far more powerful than those Caruana used as a graduate student, but their essence is the same. At one end sits a messy soup of data—say, millions of pictures of dogs. Those data are sucked into a network with a dozen or more computational layers, in which neuron-like connections “fire” in response to features of the input data. Each layer reacts to progressively more abstract features, allowing the final layer to distinguish, say, terrier from dachshund.

At first the system will botch the job. But each result is compared with labeled pictures of dogs. In a process called backpropagation, the outcome is sent backward through the network, enabling it to reweight the triggers for each neuron. The process repeats millions of times until the network learns—somehow—to make fine distinctions among breeds. “Using modern horsepower and chutzpah, you can get these things to really sing,” Caruana says. Yet that mysterious and flexible power is precisely what makes them black boxes.
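The mechanics Caruana describes fit in a few lines. Below is a minimal NumPy sketch of a single backpropagation update for a toy two-layer network; the layer sizes, activation, and learning rate are arbitrary illustrations of the loop, not a real dog classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
# A tiny two-layer network: the shape of the idea, not a production model.
params = {"W1": rng.normal(0, 0.1, (64, 2)), "W2": rng.normal(0, 0.1, (2, 64))}

def train_step(params, x, y, lr=0.1):
    """One forward pass and one backward (backpropagation) pass."""
    W1, W2 = params["W1"], params["W2"]
    h = np.maximum(0.0, W1 @ x)                 # hidden "neurons" fire on features
    logits = W2 @ h
    p = np.exp(logits) / np.exp(logits).sum()   # softmax over two "breeds"

    # Compare with the labeled answer, send the error backward through the
    # network, and reweight every connection a little.
    d_logits = p - y                            # gradient of cross-entropy loss
    d_h = W2.T @ d_logits
    d_h[h <= 0] = 0.0                           # ReLU gate blocks silent neurons
    params["W2"] -= lr * np.outer(d_logits, h)
    params["W1"] -= lr * np.outer(d_h, x)
    return -np.log(p[y.argmax()])               # loss should fall as this repeats

# Repeat over labeled examples, millions of times in practice.
x, y = rng.normal(size=2), np.array([1.0, 0.0])
for _ in range(5):
    print(train_step(params, x, y))
```

Nothing in this loop records *why* a weight ends up where it does; the opacity the article describes is baked into the training procedure itself.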

Complete original article here.

5 trends that will change the way you work in 2017 (videos)

2017-01-01

Robots are going to take a seat at the conference room table in 2017.

Humans are going to be more stressed than ever.

And to stay competitive with their new robot colleagues, workers are going to start taking smart drugs.

That’s according to futurist Faith Popcorn, the founder and CEO of the consultancy Faith Popcorn’s BrainReserve. Since its launch in 1974, the firm has helped Fortune 500 companies including MasterCard, Coca-Cola, P&G and IBM.

Here are five trends you can expect to see in the workplace in 2017, according to Popcorn.

1. Coffee alone won’t keep you competitive.

Employees are going to start taking a burgeoning class of cognitive enhancers called nootropics, or “smart drugs.” These nutritional supplements don’t all have the same ingredients but they reportedly increase physical and mental stamina.

Silicon Valley has been an early adopter of the bio-hacking trend. That’s perhaps unsurprising, as techies were also the first to try the likes of food substitute Soylent. There’s an active subreddit dedicated to the topic.

Nootropics will go mainstream in 2017 because “the robots are edging us out,” says Popcorn. “When you come to work you have to be enhanced, you have to be on the edge, you have to be able to work longer and harder. You have to be able to become more important to your company.”

2. Robots will rise.

Unskilled blue-collar workers will be the first to lose their jobs to automation, but robots will eventually replace white-collar workers, too, says Popcorn, pointing to an Oxford University study that found 47 percent of U.S. jobs are at risk of being replaced.

“Who would you rather have do your research? A cognitive computer or a human?” says Popcorn. “Human error is a disaster. … Robots don’t make mistakes.”

3. Everyone will start doing the hustle.

Already, more than a third of the U.S. workforce freelances, generating an estimated $1 trillion in revenue, according to a survey released earlier this fall by the Freelancers Union and the freelancing platform Upwork. The percentage of freelancers will increase in 2017 and beyond, Popcorn believes. “It’s accelerating every year,” she says.

She also points to some large companies that are building offices with fewer seats than employees. Citibank built an office space in Long Island City, Queens, with 150 seats for 200 employees and no assigned desks to encourage a fluid-feeling environment.

And Popcorn points to the rise of the side hustle: People “need more money than they are being paid,” she says. And they don’t trust their employers. “People are saying, ‘I want to have two or three hooks in the water. I don’t want to devote myself to one company.'”

Younger employees in particular are not interested in working for large, legacy companies like those their parents worked for, according to research Popcorn has done. “We are really turned off on ‘big.'”

4. There will be tears.

While people have always been emotional beings, historically emotions haven’t belonged inside the office. That’s basically because workplaces have largely been run by men. But that’s changing.

“The female entry into the workplace has brought emotional intelligence into the workplace and that comes with emotion,” says Popcorn. “There is a lot of anxiety about the future, there is a lot of stress-related burnout and we are seeing more emotion being displayed in the workplace.”

That doesn’t mean you should start crying on your boss’s shoulder, though. Especially if your boss is male. While women tend to be more comfortable with their feelings, men are still uncomfortable with elevated levels of emotion, says Popcorn, admitting that these gender-based observations are generalizations.


Going forward, the futurist expects to see more stress rooms in office buildings and “more of a recognition that people are living under a crushing amount of anxiety.” A stress room would be a welcoming space for employees to go to take a break and perhaps drink kava, a relaxing, root-based tea.

Open floor plans don’t give employees any place to breathe, Popcorn points out: “It’s like being watched 24/7.” Employees put in earbuds to approximate privacy, but sitting in open spaces is not conducive to employee mental health. “It is very stressful to work in the open floors,” she says. “It’s good for real estate, you can do it with fewer square feet, but it’s not particularly good for people.”

5. The boundary between work and play will crumble.

“People are going to be working 24 hours a day,” says Popcorn. Technology has enabled global, constant communication. The WeLive spaces that WeWork launched are indicative of this trend towards work and life integration, she says. “There is no line between work and play.”

Original article here.



IBM Watson Analytics Beta Now Open

2016-12-20

IBM announced that Watson Analytics, a breakthrough natural language-based cognitive service that can provide instant access to powerful predictive and visual analytic tools for businesses, is available in beta. See the Vine (vine.co/v/Ov6uvi1m7lT) for a sneak peek now.

I’m pleased to announce that I have my access, and it’s amazing. Uploading raw CSV data and playing with it is a great shortcut to finding insights. It works really well and really quickly.

IBM Watson Analytics automates once time-consuming tasks such as data preparation, predictive analysis, and visual storytelling for business professionals. Offered as a cloud-based freemium service, Watson Analytics is now accessible to all business users from any desktop or mobile device. Since being announced on September 16, more than 22,000 people have already registered for the beta. The Watson Analytics Community, a user group for sharing news, best practices, technical support and training, is also accessible starting today.

This news follows IBM’s recently announced global partnership with Twitter, which includes plans to offer Twitter data as part of IBM Watson Analytics.

Learn more about how IBM Watson Analytics works:

As part of its effort to equip all professionals with the tools needed to do their jobs better, Watson Analytics provides business professionals with a unified experience and natural language dialogue so they can better understand data and more quickly reach business goals. For example, a marketing, HR or sales rep can quickly source data, cleanse and refine it, discover insights, predict outcomes, visualize results, create reports and dashboards and explain results in familiar business terms.

To view today’s news and access a video to be shared, visit the Watson Analytics Storybook: https://ibm.biz/WAStorybook.

Original article here.



IBM Research Takes Watson to Hollywood with the First “Cognitive Movie Trailer”

2016-10-03

How do you create a movie trailer about an artificially enhanced human?

You turn to the real thing – artificial intelligence.

20th Century Fox has partnered with IBM Research to develop the first-ever “cognitive movie trailer” for its upcoming suspense/horror film, “Morgan”. Fox wanted to explore using artificial intelligence (AI) to create a horror movie trailer that would keep audiences on the edge of their seats.

Movies, especially horror movies, are incredibly subjective. Think about the scariest movie you know (for me, it’s the 1976 movie, “The Omen”). I can almost guarantee that if you ask the person next to you, they’ll have a different answer. There are patterns and types of emotions in horror movies that resonate differently with each viewer, and the intricacies and interrelation of these are what an AI system would have to identify and understand in order to create a compelling movie trailer. Our team faced the challenge of not only teaching a system to understand “what is scary,” but also of creating a trailer that a majority of viewers would consider “frightening and suspenseful.”

As with any AI system, the first step was training it to understand a subject area. Using machine learning techniques and experimental Watson APIs, our Research team trained a system on the trailers of 100 horror movies by segmenting out each scene from the trailers. Once each trailer was segmented into “moments”, the system completed the following:

1)   A visual analysis and identification of the people, objects and scenery. Each scene was tagged with an emotion from a broad bank of 24 different emotions and labels from across 22,000 scene categories, such as eerie, frightening and loving;

2)   An audio analysis of the ambient sounds (such as the character’s tone of voice and the musical score), to understand the sentiments associated with each of those scenes;

3)   An analysis of each scene’s composition (such as the location of the shot, the image framing and the lighting), to categorize the types of locations and shots that traditionally make up suspense/horror movie trailers.

The analysis was performed on each area separately and in combination with each other using statistical approaches. The system now “understands” the types of scenes that categorically fit into the structure of a suspense/horror movie trailer.
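IBM hasn’t published the system’s internals, but the three analyses above suggest a natural shape for the scoring step. The sketch below is a hypothetical reconstruction: the field names, profile weights, and scoring function are all invented for illustration, not Watson’s actual APIs.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """One candidate 'moment', carrying the three analyses described above."""
    visual_emotions: dict = field(default_factory=dict)  # e.g. {"eerie": 0.8}
    audio_sentiment: float = 0.0   # sentiment inferred from voice and score
    composition: str = "unknown"   # e.g. "close-up", "wide", "low-light"

def trailer_fit(scene, profile):
    """Score a scene against a statistical profile learned from the
    100-horror-trailer corpus (reduced here to simple label weights)."""
    emotion_fit = sum(profile.get(label, 0.0) * strength
                      for label, strength in scene.visual_emotions.items())
    return emotion_fit + scene.audio_sentiment + profile.get(scene.composition, 0.0)

def shortlist(scenes, profile, k=10):
    # Keep the k best candidate moments, as the system did for "Morgan".
    return sorted(scenes, key=lambda s: trailer_fit(s, profile), reverse=True)[:k]
```

Whatever the real implementation, the output is the same kind of artifact: a ranked shortlist of moments for a human editor to work from.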

Then, it was time for the real test. We fed the system the full-length feature film, “Morgan”. After the system “watched” the movie, it identified 10 moments that would be the best candidates for a trailer. In this case, these happened to reflect tender or suspenseful moments. If we were working with a different movie, perhaps “The Omen”, it might have selected different types of scenes. If we were working with a comedy, it would have a different set of parameters to select different types of moments.

It’s important to note that there is no “ground truth” with creative projects like this one. Neither our team nor the Fox team knew exactly what we were looking for before we started the process. Based on our training and testing of the system, we knew that tender and suspenseful scenes would be short-listed, but we didn’t know which ones the system would pick to create a complete trailer. As with most creative projects, we thought, “we’ll know it when we see it.”

Our system could select the moments, but it’s not an editor. We partnered with a resident IBM filmmaker to arrange and edit each of the moments together into a comprehensive trailer. You’ll see his expertise in the addition of black title cards, the musical overlay and the order of moments in the trailer.

Not surprisingly, our system chose some moments in the movie that were not included in other “Morgan” trailers. The system allowed us to look at moments in the movie in different ways – moments that might not have traditionally made the cut were now short-listed as candidates. On the other hand, when we reviewed all the scenes that our system selected, one didn’t seem to fit with the bigger story we were trying to tell, so we decided not to use it. Even Watson sometimes ends up with footage on the cutting room floor!

Traditionally, creating a movie trailer is a labor-intensive, completely manual process. Teams have to sort through hours of footage and manually select each and every potential candidate moment. This process is expensive and time-consuming, taking anywhere from 10 to 30 days to complete.

From a 90-minute movie, our system provided our filmmaker with a total of six minutes of footage. From the moment our system watched “Morgan” for the first time to the moment our filmmaker finished the final edit, the entire process took about 24 hours.

Reducing the time of a process from weeks to hours – that is the true power of AI.

The combination of machine intelligence and human expertise is a powerful one. This research investigation is simply the first of many into what we hope will be a promising area of machine and human creativity. We don’t have the only solution for this challenge, but we’re excited about pushing the possibilities of how AI can augment the expertise and creativity of individuals.

AI is being put to work across a variety of industries; helping scientists discover promising treatment pathways to fight diseases or helping law experts discover connections between cases. Film making is just one more example of how cognitive computing systems can help people make new discoveries.

Original article here.



Is Most Published Research Wrong? (video)

2016-08-26

It sounds crazy, but there are logical reasons why the majority of published research literature is false. This video digs into the reasons for this and what’s being done about it.
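The core of that argument is base-rate arithmetic, in the spirit of Ioannidis’s 2005 paper “Why Most Published Research Findings Are False.” A few illustrative numbers (assumed here, not taken from the video) show how it works:

```python
# Suppose only 1 in 10 tested hypotheses is actually true, tests run at the
# conventional 5% false-positive rate, and studies have 80% power.
prior, alpha, power = 0.10, 0.05, 0.80

true_positives = prior * power           # 0.08 of all studies
false_positives = (1 - prior) * alpha    # 0.045 of all studies

ppv = true_positives / (true_positives + false_positives)
print(f"Share of positive findings that are real: {ppv:.0%}")  # ~64%

# With lower power (say 35%, common in practice), the majority of
# published positive results would indeed be false:
ppv_low = (prior * 0.35) / (prior * 0.35 + (1 - prior) * alpha)
print(f"With 35% power: {ppv_low:.0%}")  # ~44%
```

Add publication bias and flexible analyses on top of this arithmetic, and the “majority false” claim stops sounding crazy.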

Original YouTube link is here.

