Posted On: November 2016 - AppFerret


How Economists View the Rise of Artificial Intelligence

2016-11-25

Machine learning will drop the cost of making predictions, but raise the value of human judgement.

To really understand the impact of artificial intelligence in the modern world, it’s best to think beyond the mega-research projects like those that helped Google recognize cats in photos.

According to professor Ajay Agrawal of the University of Toronto, humanity should be pondering how cutting-edge A.I. techniques like deep learning, which has dramatically improved computers’ ability to recognize patterns in enormous amounts of data, could reshape the global economy.

Speaking at the Machine Learning and the Market for Intelligence conference, held this week by the Rotman School of Management at the University of Toronto, Agrawal likened the current A.I. boom to 1995, when the Internet went mainstream. Once it gained enough traction, the Internet ceased to be seen as merely a new technology. Instead, it was seen as a new economy where businesses could emerge online.

However, one group of people refused to call the Internet a new economy: economists. For them, the Internet didn’t usher in a new economy per se; it simply altered the existing economy by introducing a new way to purchase goods like shoes or toothbrushes at cheaper rates than brick-and-mortar stores offered.

“Economists think of technology as drops in the cost of particular things,” Agrawal said.

Likewise, the advent of calculators and rudimentary computers lowered the cost of performing basic arithmetic, which aided workers at the census bureau who had previously slaved away for hours crunching data by hand.

Similarly, with the rise of digital cameras, improvements in software and hardware helped manufacturers run better internal calculations within the device that could help users capture and improve their digital photos. Researchers essentially applied calculations to the old-school field of photography, something previous generations probably never believed would be touched by math, he explained.

As people, “we shifted to an arithmetic solution” to improve our photos, and demand for digital cameras rose, he added. Traditional film cameras, which require film and chemical baths to produce good photos, went the other way. “Those went down,” said Agrawal, in terms of both cost and demand.


All this takes us back to the rise of machine learning and its ability to learn from data and make predictions based on the information.

The rise of machine learning will lead to “a drop in the cost of prediction,” he said. However, this drop will cause certain other things to go up in value, he explained.

For example, a doctor who sees a patient with an injured leg will probably take an x-ray of the limb and ask questions to gather information, so that he or she can make a prediction about what to do next. Advanced data analytics would presumably make it easier to predict the best course of treatment, but it will still be up to the doctor to follow through or not.

So while “machine intelligence is a substitute for human prediction,” it can also be “a complement to human judgment, so the value of human judgment increases,” Agrawal said.
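Agrawal’s prediction-versus-judgment split can be sketched in a few lines: the machine supplies a cheap probability, while human judgment supplies the payoffs that turn that probability into a decision. This is a toy illustration with invented numbers, not anything from the talk.

```python
# The machine predicts; human judgment supplies the payoffs.
# All payoff values and probabilities below are invented.

def best_action(p_fracture: float, payoffs: dict) -> str:
    """Pick the action with the highest expected payoff, given the
    machine's predicted probability that the leg is fractured."""
    expected = {
        action: p_fracture * if_fracture + (1 - p_fracture) * if_healthy
        for action, (if_fracture, if_healthy) in payoffs.items()
    }
    return max(expected, key=expected.get)

# Judgment: the doctor's (hypothetical) valuation of each outcome,
# as (payoff if fractured, payoff if healthy).
payoffs = {
    "cast": (100, -20),   # right call if fractured, wasteful if not
    "rest": (-50, 10),    # risky if fractured, fine if not
}

print(best_action(0.8, payoffs))  # high predicted risk -> "cast"
print(best_action(0.1, payoffs))  # low predicted risk -> "rest"
```

Cheaper predictions make this loop run more often, but choosing the payoffs stays a human call, which is exactly why judgment gains value.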

In some ways, Agrawal’s comments call to mind a recent research paper in which researchers developed an A.I. system that correctly predicted the outcomes of roughly 600 cases before the European Court of Human Rights 79% of the time. The report’s authors explained that while the tool could help discover patterns in the court cases, “they do not believe AI will be able to replace human judgement,” as reported by the Verge.

The authors of that research paper don’t want A.I.-powered computers to replace humans as new, futuristic cyber judges. Instead, they want the tool to help humans make more thoughtful judgements that can ultimately improve human rights.

Original article here.



How to profit from the IoT: 4 quick successes and 4 bigger ideas

2016-11-24

During the past few years, much has been made of the billions of sensors, cameras, and other devices coming online in the “Internet of Things” (IoT)—and the trillions of dollars in potential economic value that is expected to come of it. Yet as exciting as the IoT future may be, a lot of the industry messaging has gone right over the heads of people who today operate plants, run businesses, and are responsible for implementing IoT-based solutions. Investors find themselves wondering what is real, and what is a hyped-up vision of a future that is still years away.

Over the past decade, I have met with dozens of organizations in all corners of the globe, talking with people about IoT. I’ve worked with traditional industrial companies struggling to change outmoded manufacturing processes, and I’ve worked with innovative young startups that are redefining long-held assumptions and roles. And I can tell you that the benefits of IoT are not in some far-off future scenario. They are here and now—and growing. The question is not whether companies should begin deploying IoT—the benefits of IoT are clear—but how.

So, how do companies get started on the IoT journey? It’s usually best to begin with a small, well-defined project that improves efficiency and productivity around existing processes. I’ve seen countless organizations, large and small, enjoy early success in their IoT journey by taking one of the following “fast paths” to IoT payback:

 
  • Connected operations. By connecting key processes and devices in its production process on a single network, iconic American motorcycle maker Harley-Davidson increased productivity by 80%, reduced its build-to-order cycle from 18 months to two weeks, and grew overall profitability by 3%-4%.
  • Remote operations. A dairy company in India began remotely monitoring the freezers in its 150 ice cream stores, providing alerts in case of power outages. The company began realizing a payback within a month and saw a five-fold return on its investment within 13 months.
  • Predictive analytics. My employer Cisco has deployed sensors and used energy analytics software in manufacturing plants, reducing energy consumption by 15% to 20%.
  • Predictive maintenance. Global mining company Rio Tinto uses sensors to monitor the condition of its vehicles, identifying maintenance needs before they become problems—and saves $2 million a day every time it avoids a breakdown.
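The remote-monitoring pattern behind the dairy example above can be sketched very simply: poll each store’s freezer reading and raise an alert when it drifts out of range. The store names and temperature threshold below are hypothetical.

```python
# A minimal sketch of remote freezer monitoring: one polling cycle
# checks every store's reading against an assumed safe limit.

FREEZER_MAX_C = -15.0  # assumed safe upper bound for ice cream storage

def check_freezers(readings: dict) -> list:
    """Return alert messages for any store whose freezer is too warm."""
    return [
        f"ALERT: {store} freezer at {temp:.1f} C (limit {FREEZER_MAX_C} C)"
        for store, temp in readings.items()
        if temp > FREEZER_MAX_C
    ]

# One polling cycle across three (invented) stores; store_002 has
# likely lost power, so it is the only one that trips an alert.
alerts = check_freezers({"store_001": -18.2, "store_002": -3.5, "store_003": -16.0})
for a in alerts:
    print(a)
```

In a real deployment the readings would arrive over the network from the stores’ sensors, but the payback logic is this simple: catch the out-of-range freezer before the inventory is lost.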

These four well-proven scenarios are ideal candidates to get started on IoT projects. Armed with an early success, companies can then build momentum and begin to tackle more transformative IoT solutions. Here, IoT provides rich opportunities across many domains, including:

 
  • New business opportunities and revenue streams. Connected operations combined with 3D printing, for example, are making personalization and mass customization possible in ways not imagined a few years ago.
  • New business models. IoT enables equipment manufacturers to adopt service-oriented business models. By gathering data from devices installed at a customer site, manufacturers like Japanese industrial equipment maker Fanuc can offer remote monitoring, analytics and predictive maintenance services to reduce costs and improve uptime.
  • New business structures. In many traditional industries, customers have typically looked to a single vendor for a complete end-to-end solution, often using closed, proprietary technologies. Today IoT, with its flexibility, cost, and time-to-market advantages, is driving a shift to an open technology model where solution providers form an ecosystem of partners. As a result, each participant provides its best-in-class capabilities to contribute to a complete IoT solution for their customers.
  • New value propositions for consumers. IoT is helping companies provide new hyper-relevant customer experiences and faster, more accurate services than ever before. Just think of the ever-increasing volume of holiday gift orders placed online on “Cyber Monday.” IoT is speeding up the entire fulfillment process, from ordering to delivery. Connected robots and Radio Frequency Identification (RFID) tags in the warehouse make the picking and packing process faster and more accurate. Real-time preventive maintenance systems keep delivery vehicles up and running. Telematic sensors record temperature and humidity throughout the process. So, not only can you track your order to your doorstep, your packages are delivered on time—and they arrive in optimal condition.

 

So, yes, IoT is real today and is already having a tremendous impact. It is gaining traction in industrial segments, logistics, transportation, and smart cities. Other industries, such as healthcare, retail, and agriculture, are following closely.

We are just beginning to understand IoT’s potential. But if you are an investor wondering where the smart money is going, one thing is certain: 10 years from now, you’ll have to look hard to find an industry that has not been transformed by IoT.

Original article here.

 



Google, Facebook, and Microsoft Are Remaking Themselves Around AI

2016-11-24

FEI-FEI LI IS a big deal in the world of AI. As the director of the Artificial Intelligence and Vision labs at Stanford University, she oversaw the creation of ImageNet, a vast database of images designed to accelerate the development of AI that can “see.” And, well, it worked, helping to drive the creation of deep learning systems that can recognize objects, animals, people, and even entire scenes in photos—technology that has become commonplace on the world’s biggest photo-sharing sites. Now, Fei-Fei will help run a brand new AI group inside Google, a move that reflects just how aggressively the world’s biggest tech companies are remaking themselves around this breed of artificial intelligence.

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

Oren Etzioni, CEO of the not-for-profit Allen Institute for Artificial Intelligence, says that these changes are partly about marketing—efforts to ride the AI hype wave. Google, for example, is focusing public attention on Fei-Fei’s new group because that’s good for the company’s cloud computing business. But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

The New Cloud

Fei-Fei’s group is an effort to solidify Google’s position on a new front in the AI wars. The company is challenging rivals like Amazon, Microsoft, and IBM in building cloud computing services specifically designed for artificial intelligence work. This includes services not just for image recognition, but speech recognition, machine-driven translation, natural language understanding, and more.

Cloud computing doesn’t always get the same attention as consumer apps and phones, but it could come to dominate the balance sheet at these giant companies. Even Amazon and Google, known for their consumer-oriented services, believe that cloud computing could eventually become their primary source of revenue. And in the years to come, AI services will play right into the trend, providing tools that allow a world of businesses to build machine learning services they couldn’t build on their own. Iddo Gino, CEO of RapidAPI, a company that helps businesses use such services, says they have already reached thousands of developers, with image recognition services leading the way.

When it announced Fei-Fei’s appointment last week, Google unveiled new versions of cloud services that offer image and speech recognition as well as machine-driven translation. And the company said it will soon offer a service that allows others access to vast farms of GPU processors, the chips that are essential to running deep neural networks. This came just weeks after Amazon hired a notable Carnegie Mellon researcher to run its own cloud computing group for AI—and just a day after Microsoft formally unveiled new services for building “chatbots” and announced a deal to provide GPU services to OpenAI, the AI lab established by Tesla founder Elon Musk and Y Combinator president Sam Altman.

The New Microsoft

Even as they move to provide AI to others, these big internet players are looking to significantly accelerate the progress of artificial intelligence across their own organizations. In late September, Microsoft announced the formation of a new group under Shum called the Microsoft AI and Research Group. Shum will oversee more than 5,000 computer scientists and engineers focused on efforts to push AI into the company’s products, including the Bing search engine, the Cortana digital assistant, and Microsoft’s forays into robotics.

The company had already reorganized its research group to move new technologies into products more quickly. With AI, Shum says, the company aims to move even faster. In recent months, Microsoft pushed its chatbot work out of research and into live products—though not entirely successfully. Still, it’s the path from research to product that the company hopes to accelerate in the years to come.

“With AI, we don’t really know what the customer expectation is,” Shum says. By moving research closer to the team that actually builds the products, the company believes it can develop a better understanding of how AI can do things customers truly want.

The New Brains

In similar fashion, Google, Facebook, and Twitter have already formed central AI teams designed to spread artificial intelligence throughout their companies. The Google Brain team began as a project inside the Google X lab under another former Stanford computer science professor, Andrew Ng, now chief scientist at Baidu. The team provides well-known services such as image recognition for Google Photos and speech recognition for Android. But it also works with potentially any group at Google, such as the company’s security teams, which are looking for ways to identify security bugs and malware through machine learning.

Facebook, meanwhile, runs its own AI research lab as well as a Brain-like team known as the Applied Machine Learning Group. Its mission is to push AI across the entire family of Facebook products, and according to chief technology officer Mike Schroepfer, it’s already working: one in five Facebook engineers now make use of machine learning. Schroepfer calls the tools built by Facebook’s Applied ML group “a big flywheel that has changed everything” inside the company. “When they build a new model or build a new technique, it immediately gets used by thousands of people working on products that serve billions of people,” he says. Twitter has built a similar team, called Cortex, after acquiring several AI startups.

The New Education

The trouble for all of these companies is that finding the talent needed to drive all this AI work can be difficult. Given that deep neural networking has only recently entered the mainstream, only so many Fei-Fei Lis exist to go around. Everyday coders won’t do. Deep neural networking is a very different way of building computer services. Rather than coding software to behave a certain way, engineers coax results from vast amounts of data—more like a coach than a player.
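The coding-versus-coaxing contrast can be made concrete with a deliberately tiny example: instead of a hand-coded rule, a one-parameter “model” picks its threshold from labeled data. The data and numbers here are invented.

```python
# Coding vs. coaxing: a fixed rule versus a threshold fit from data.

def hand_coded(x: float) -> int:
    return 1 if x > 5.0 else 0  # a programmer's guess, fixed forever

def train_threshold(xs: list, ys: list) -> float:
    """Pick the candidate threshold with the fewest mistakes on the data."""
    best_t, best_err = 0.0, len(xs) + 1
    for t in sorted(xs):
        err = sum(int((x > t) != bool(y)) for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

xs = [1.0, 2.0, 3.0, 7.0, 8.0, 9.0]  # invented training inputs
ys = [0,   0,   0,   1,   1,   1]    # invented labels
t = train_threshold(xs, ys)          # the data, not the coder, sets t
print(t)
```

Deep learning scales this idea up from one parameter to billions, which is why the skill set looks more like coaching a system with data than writing out its behavior.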

As a result, these big companies are also working to retrain their employees in this new way of doing things. As it revealed last spring, Google is now running internal classes in the art of deep learning, and Facebook offers machine learning instruction to all engineers inside the company alongside a formal program that allows employees to become full-time AI researchers.

Yes, artificial intelligence is all the buzz in the tech industry right now, which can make it feel like a passing fad. But inside Google and Microsoft and Amazon, it’s certainly not. And these companies are intent on pushing it across the rest of the tech world too.

Original article here.



Amazon Echo now talks you through 60,000 recipes

2016-11-21

Allrecipes’ Alexa skill helps you cook, even if you’re not sure what you want to make.

Believe it or not, there hasn’t really been a comprehensive recipe skill for Amazon Echo speakers. Campbell’s skill is focused on the soup brand, IFTTT integration is imperfect and Jamie Oliver’s skill won’t read cooking instructions aloud. Allrecipes might just save the day, though. It just launched an Alexa skill that guides you through cooking 60,000 meals — and importantly, helps you find something to cook in the first place. You can ask what’s possible with the ingredients you have on hand, find a quick-to-make dish or check on measurements.

When you’re in the middle of cooking, you can pause, repeat or advance steps.

The skill is free to use, and works with any device that supports Alexa skills in the first place (including Fire TV). If it works as well as promised, it might be a crucial addition. The Echo is already the quintessential kitchen speaker for many people — it’s that much more useful if it can save you from flipping through a cookbook (or a recipe app on your phone) with your flour-covered hands.
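For a sense of the mechanics, a skill’s backend receives an intent from the Alexa service and returns a JSON envelope telling the device what to say. A minimal sketch follows; the intent name is hypothetical (the article doesn’t describe Allrecipes’ actual intents), while the response envelope follows Alexa’s published JSON shape.

```python
# A rough sketch of an Alexa skill backend: map an intent to a spoken
# reply. "NextStepIntent" and the recipe text are invented for illustration.

def handle_intent(intent_name: str, slots: dict) -> dict:
    if intent_name == "NextStepIntent":  # hypothetical intent name
        speech = f"Step {slots.get('step', 1)}: preheat the oven to 350 degrees."
    else:
        speech = "Sorry, I didn't catch that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False,  # keep listening for "next step"
        },
    }

reply = handle_intent("NextStepIntent", {"step": 2})
print(reply["response"]["outputSpeech"]["text"])
```

Keeping `shouldEndSession` false is what lets a cooking skill walk you through a recipe step by step instead of hanging up after each answer.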

Original article here.

 



2016’s Top Tech Billionaires (Infographic)

2016-11-13

From Mark Zuckerberg to Larry Ellison, these leaders have proven to the world that anything is possible.

From developments in robotics to an influx of self-driving technology, 2016 has been an exciting year. And with the proliferation of tech in our daily lives, it’s no wonder that some of the world’s top billionaires come from the tech space.

From Mark Zuckerberg to Larry Ellison, these leaders have proven to the world that anything is possible. But with so many innovators in the tech industry — who are the richest?

To find out, check out ERS IT Solutions’ 2016 Top Tech Billionaires infographic below.

 

Original article here.

 



We need universal basic income because robots will take all the jobs – Musk

2016-11-12

We may need to pay people just to live in an automated world, says space biz baron.

Elon Musk reckons the robot revolution is inevitable and it’s going to take all the jobs.

For humans to survive in an automated world, he said that governments are going to be forced to bring in a universal basic income—paying each citizen a certain amount of money so they can afford to survive. According to Musk, there aren’t likely to be any other options.

“There is a pretty good chance we end up with a universal basic income, or something like that, due to automation,” he told CNBC in an interview. “Yeah, I am not sure what else one would do. I think that is what would happen.”

The idea behind universal basic income is to replace all the different sources of welfare, which are hard to administer and come with policing costs. Instead, the government gives everyone a lump sum each month—the size of which would vary depending on political beliefs—and they can spend it however they want.

Switzerland, a country with high wages and high employment, recently held a referendum on giving its people 2,500 Swiss francs (£2,065) per month, plus 625 francs (£516) per child. It was ultimately rejected by a wide margin by the country’s fairly conservative electorate, who generally thought it would give people too much for free.
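Using the figures quoted above (2,500 francs per month, plus 625 francs per child, reading the headline figure as applying per adult), a household’s proposed monthly payout is simple arithmetic:

```python
# Monthly basic income under the (rejected) Swiss proposal's figures.
# Assumes the 2,500-franc figure applies to each adult in the household.

ADULT_CHF, CHILD_CHF = 2500, 625

def monthly_basic_income(adults: int, children: int) -> int:
    return adults * ADULT_CHF + children * CHILD_CHF

print(monthly_basic_income(2, 2))  # two adults, two children: 6250 CHF
print(monthly_basic_income(1, 0))  # single adult: 2500 CHF
```

The administrative appeal of such schemes is visible even in this sketch: the entire benefit calculation is one line, with no means testing to police.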

President Obama has also floated the idea in a confab with Wired: “Whether a universal income is the right model—is it gonna be accepted by a broad base of people?—that’s a debate that we’ll be having over the next 10 or 20 years.”

Robots have already replaced numerous blue collar manufacturing jobs, and are taking over more and more warehousing and logistics roles. Some—perhaps prematurely—are fretting about future AIs being developed to replace professions such as doctors and lawyers. Already, moves are being made in that direction, with chatbots which can get people off parking tickets, and an AI that can predict cases at the European Court of Human Rights. Doctors should be looking over their shoulders, too.

Musk isn’t necessarily downbeat on the automated future, however. He thinks that in the future “people will have time to do other things, more complex things, more interesting things,” and they’ll “certainly have more leisure time.” And then, he added, “we gotta figure how we integrate with a world and future with a vast AI.”

“Ultimately,” he said, “I think there has to be some improved symbiosis with digital super intelligence.”

Original article here.



A $2 Billion Chip to Accelerate Artificial Intelligence

2016-11-10

A new chip design from Nvidia will allow machine-learning researchers to marshal larger collections of simulated neurons.

The field of artificial intelligence has experienced a striking spurt of progress in recent years, with software becoming much better at understanding images and speech, and at new tasks such as playing games. Now the company whose hardware has underpinned much of that progress has created a chip to keep it going.

On Tuesday Nvidia announced a new chip called the Tesla P100 that’s designed to put more power behind a technique called deep learning. This technique has produced recent major advances such as the Google software AlphaGo that defeated the world’s top Go player last month (see “Five Lessons from AlphaGo’s Historic Victory”).

Deep learning involves passing data through large collections of crudely simulated neurons. The P100 could help deliver more breakthroughs by making it possible for computer scientists to feed more data to their artificial neural networks or to create larger collections of virtual neurons.
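“Passing data through large collections of crudely simulated neurons” can be shown in miniature: each simulated neuron takes a weighted sum of its inputs and squashes the result, and layers of such neurons are chained together. A pure-Python sketch with random, untrained weights:

```python
import math
import random

# A miniature neural network: one hidden layer of four neurons feeding
# one output neuron. Weights are random and untrained; real deep
# learning adjusts them from data, across millions of neurons.

random.seed(0)

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, squashed into (-1, 1) by tanh.
    return math.tanh(sum(i * w for i, w in zip(inputs, weights)) + bias)

def forward(x, hidden_weights, hidden_biases, out_weights, out_bias):
    hidden = [neuron(x, w, b) for w, b in zip(hidden_weights, hidden_biases)]
    return neuron(hidden, out_weights, out_bias)

# Random parameters for a 3-input, 4-hidden-neuron, 1-output network.
hw = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
hb = [random.uniform(-1, 1) for _ in range(4)]
ow = [random.uniform(-1, 1) for _ in range(4)]
ob = random.uniform(-1, 1)

y = forward([0.5, -0.2, 0.9], hw, hb, ow, ob)
print(y)  # a single activation in (-1, 1)
```

The chip’s job is to run exactly these multiply-accumulate operations, billions at a time, which is why GPU-style hardware suits the workload so well.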

Artificial neural networks have been around for decades, but deep learning only became relevant in the last five years, after researchers figured out that chips originally designed to handle video-game graphics made the technique much more powerful. Graphics processors remain crucial for deep learning, but Nvidia CEO Jen-Hsun Huang says that it is now time to make chips customized for this use case.

At a company event in San Jose, he said, “For the first time we designed a [graphics-processing] architecture dedicated to accelerating AI and to accelerating deep learning.” Nvidia spent more than $2 billion on R&D to produce the new chip, said Huang. It has a total of 15 billion transistors, roughly three times as many as Nvidia’s previous chips. Huang said an artificial neural network powered by the new chip could learn from incoming data 12 times as fast as was possible using Nvidia’s previous best chip.

Deep-learning researchers from Facebook, Microsoft, and other companies that Nvidia granted early access to the new chip said they expect it to accelerate their progress by allowing them to work with larger collections of neurons.

“I think we’re going to be able to go quite a bit larger than we have been able to in the past, like 30 times bigger,” said Bryan Catanzaro, who works on deep learning at the Chinese search company Baidu. Increasing the size of neural networks has previously enabled major jumps in the smartness of software. For example, last year Microsoft managed to make software that beats humans at recognizing objects in photos by creating a much larger neural network.

Huang of Nvidia said that the new chip is already in production and that he expects cloud-computing companies to start using it this year. IBM, Dell, and HP are expected to sell it inside servers starting next year.

He also unveiled a special computer for deep-learning researchers that packs together eight P100 chips with memory chips and flash hard drives. Leading academic research groups, including ones at the University of California, Berkeley, Stanford, New York University, and MIT, are being given models of that computer, known as the DGX-1, which will also be sold for $129,000.

Original article here.



Is Mobile Healthcare the Future? [Infographic]

2016-11-07

Mobile health, loosely defined as the practice of medicine and public health supported by mobile devices, is projected to be a $26 billion industry by 2017! With over 97,000 health and fitness related mobile apps currently on Google Play and the Apple App Store, and 4 million downloads per day, it is difficult to deny the rising popularity of the industry.

With Urgent Care and MedCoach leading the charge in top free medical apps, we’ve decided to create an infographic that answers the question, “Is Mobile Health The Future?” Read on to find out more interesting stats on the mobile healthcare industry.

 

Original article here.



Artificial Intelligence Will Grow 300% in 2017

2016-11-06

Insights matter. Businesses that use artificial intelligence (AI), big data and the Internet of Things (IoT) technologies to uncover new business insights “will steal $1.2 trillion per annum from their less informed peers by 2020.” So says Forrester in a new report, “Predictions 2017: Artificial Intelligence Will Drive The Insights Revolution.”

Across all businesses, there will be a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. Through the use of cognitive interfaces into complex systems, advanced analytics, and machine learning technology, AI will provide business users access to powerful insights never before available to them. It will help, says Forrester, “drive faster business decisions in marketing, ecommerce, product management and other areas of the business by helping close the gap from insights to action.”

The combination of AI, big data, and IoT technologies will enable businesses that invest in them and implement them successfully to overcome barriers to data access and to mining useful insights. In 2017 these technologies will increase businesses’ access to data, broaden the types of data that can be analyzed, and raise the level of sophistication of the resulting insight. As a result, Forrester predicts an acceleration in the trend towards democratization of data analysis. While in 2015 it found that only 51% of data and analytics decision-makers said they were able to easily obtain data and analyze it without the help of technologists, Forrester expects this figure to rise to around 66% in 2017.

Big data technologies will mature, and vendors will increasingly integrate them with their traditional analytics platforms, which will facilitate their incorporation into existing analytics processes in a wide range of organizations. The use of a single architecture for big data convergence with agile and actionable insights will become more widespread.

The third set of technologies supporting insight-driven businesses, those associated with IoT, will also become integrated with more traditional analytics offerings and Forrester expects the number of digital analytics vendors offering IoT insights capabilities to double in 2017. This will encourage their customers to invest in networking more devices and exploring the data they produce. For example, Forrester has found that 67% of telecommunications decision-makers are considering or prioritizing developing IoT or M2M initiatives in 2017.

The increased investment in IoT will lead to new types of analytics, which in turn will lead to new business insights. Currently, much of the data generated by edge devices such as mobile phones, wearables, or cars goes unused, as “immature data and analytics practices cause most firms to squander these insights opportunities,” says Forrester. In 2016, fewer than 50% of data and analytics decision-makers had adopted location analytics, but Forrester expects adoption of location analytics to grow to over two-thirds of businesses by the end of 2017. The resulting new insights will enable firms to optimize their customers’ experiences as they engage in the physical world with products, services and support.

In general, Forrester sees encouraging signs that more companies are investing in initiatives to get rid of existing silos of customer knowledge so they can coordinate better and drive insights throughout the entire enterprise. Specifically, Forrester sees three such initiatives becoming prominent in 2017:

Organizations with Chief Data Officers (CDOs) will become the majority in 2017, up from a global average of 47% in 2016. But to become truly insights-driven, says Forrester, “firms must eventually assign data responsibilities to CIOs and CMOs, and even CEOs, in order to drive swift business action based on data driven insights.”

Customer data management projects will increase by 75%. In 2016, for the first time, 39% of organizations have embarked on a big data initiative to support cross-channel tracking and attribution, customer journey analytics, and better segmentation. And nearly one-third indicated plans to adopt big data technologies and solutions in the next twelve months.

Forrester expects to see a marked increase in the adoption of enterprise-wide insights-driven practices as firms digitally transform their business in 2017. Leading customer intelligence practices and strategies will become “the poster child for business transformation,” says Forrester.

Longer term, according to Forrester’s “The Top Emerging Technologies To Watch: 2017 To 2021,” artificial intelligence-based services and applications will eventually change most industries and redistribute the workforce.

Original article here.



Mobile to account for 75% of internet use in 2017

2016-11-01

Global smartphone penetration is driving up mobile as the primary mode of accessing the internet, according to a new report from mobile ad firm Zenith. The firm projects that at the current rate of growth, mobile devices will account for 75% of all internet use in 2017 globally and 79% in 2018.

As mobile becomes increasingly important to the online experience, brands and businesses will need to ensure their mobile offering is optimized for the user.   

The main drivers behind the projected growth of mobile internet are rapid growth of smartphone ownership, access to faster internet, and the increasing popularity of large-screen devices. 

  • Smartphone penetration — the share of the population with smartphones — now sits at 56% globally, up 33 percentage points in just four years, notes Zenith. Smartphones have become more accessible to consumers in emerging and mature markets alike as the cost of high-performing devices continues to decline. 
  • 4G is becoming more readily available in a greater number of markets globally. 4G supports larger amounts of data than 3G and 2G at faster speeds, giving users the ability to spend more time in richer media. Globally, 4G subscriptions are projected to grow at an annualized rate of 25% between 2015 and 2021, to reach 4.3 billion subscriptions, according to Ericsson. 
  • Phablets — smartphones with screens 5.1 inches and larger — are quickly growing in popularity. By 2017, phablets are expected to account for more than half of all smartphones shipped globally, according to Flurry. The large-screen smartphones are proving increasingly popular as mobile user behavior shifts toward more visual-heavy activities such as online video and gaming. 
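As a quick sanity check on Ericsson's 4G figures, compounding 25% annual growth over the six years from 2015 to 2021 implies a 2015 base of roughly 1.1 billion subscriptions (4.3 billion divided by 1.25 to the sixth power). A minimal sketch of that arithmetic (the function and variable names are illustrative, not from the report):

```python
def compound(base, rate, years):
    """Value after compounding `base` at annual `rate` for `years` years."""
    return base * (1 + rate) ** years

target_2021 = 4.3e9   # Ericsson's projected 4G subscriptions in 2021
rate = 0.25           # 25% annualized growth rate
years = 6             # 2015 -> 2021

# Work backwards from the 2021 target to the implied 2015 base.
implied_2015_base = target_2021 / (1 + rate) ** years
print(f"Implied 2015 base: {implied_2015_base / 1e9:.2f}B subscriptions")
```

Running this prints an implied base of about 1.13 billion subscriptions, which is consistent with the scale of 4G adoption in 2015.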

The forecast underscores how important it is for brands and businesses to make their digital strategy mobile-first. To best reach consumers, brands need to focus on where consumers are spending their time.

This doesn’t necessarily mean building an app, but it does mean ensuring that the mobile website delivers a solid user experience rather than a shrunken version of the desktop site. It also means creating advertisements optimized for the mobile experience. Zenith projects that mobile will overtake desktop’s share of internet advertising in 2017, growing further to make up 60% of all internet advertising by 2018. 

Providing further support for the overall shift toward mobile, Google split its search indexes last month between mobile and desktop in order to provide more accurate and mobile-relevant search results. This means that businesses without a mobile experience could miss out on a massive chunk of internet users in the future.  

Jessica Smith, a research analyst at BI Intelligence, has compiled a detailed report on mobile marketing that takes a close look at the different tactics being used today, spanning legacy mobile technologies like SMS to emerging capabilities like beacon-aided location-based marketing. The report also identifies some of the most useful mobile marketing technologies that mobile marketers are putting to good use as parts of larger strategies.

Here are some key takeaways from the report:

  • As consumers spend more time on their mobile devices, marketing campaigns are following suit. Mobile ad spend continues to lag mobile time spent, providing an opportunity for creative marketers.
  • Marketers should leverage different mobile tactics depending on the size and demographics of the audience they want to reach and the type of message they want to send. With all tactics, marketers need to respect the personal nature of the mobile device and pay attention to the potential for communication overload.
  • Mobile messaging — particularly SMS and email — has the broadest reach and highest adoption among mobile users. Messaging apps, relative newcomers but gaining fast in popularity, offer more innovative and engaging outreach options.
  • Emerging technology, such as dynamic creative optimization, is breathing new life into mobile browser-based ad campaigns, but marketers should keep an eye on consumer adoption of mobile ad blockers.
  • In-app advertising can generate high engagement rates, especially with video. Location-based apps and beacons offer additional data that can enhance targeting capabilities.

In full, the report:

  • Identifies the major mobile technologies being used to reach consumers.
  • Sizes up the reach and potential of each of these mobile technologies.
  • Presents an example of a company or brand that has successfully leveraged that mobile technology to reach consumers.
  • Assesses the efficacy of each approach.
  • Examines the potential pitfalls and other shortcomings of each mobile technology.

To get your copy of this invaluable guide to the world of mobile marketing, choose one of these options:

  1. Subscribe to an ALL-ACCESS Membership with BI Intelligence and gain immediate access to this report AND over 100 other expertly researched deep-dive reports, subscriptions to all of our daily newsletters, and much more. >> START A MEMBERSHIP
  2. Purchase the report and download it immediately from our research store. >> BUY THE REPORT

The choice is yours. But however you decide to acquire this report, you’ll be giving yourself a powerful advantage in understanding how mobile marketing is rapidly evolving.

Original article here.

