Posted On: September 2017 - AppFerret


Smartphone users on Wi-Fi drive most website traffic

2017-09-26

Smartphones are responsible for significant web traffic growth, and a surprising amount of it is on Wi-Fi, not mobile networks.

Web visits from desktops and tablets have declined dramatically, says Adobe Digital Insights in Adobe Mobile Trends Refresh — Q2 2017.

The device people are using: their smartphone. And the majority of that device’s traffic is arriving via Wi-Fi connections, not mobile networks, the analytics-oriented research firm says. Adobe has been tracking over 150 billion visits to 400 websites and apps since 2015.

The sites these mobile users are visiting belong to large organizations in national news, media and entertainment, and retail, with over 60 percent of those smartphone visits arriving over Wi-Fi.

Major travel, banking and investment, automotive, and insurance company sites are up there, too, with more than 50 percent of their smartphone traffic arriving over Wi-Fi rather than mobile networks.

Cisco bullish on Wi-Fi

Networking equipment vendor Cisco is also bullish on Wi-Fi. In research published in February, it says that by next year, “Wi-Fi traffic will even surpass fixed/wired traffic.” And by 2021, 63 percent of global mobile data traffic will be offloaded onto Wi-Fi networks rather than carried over mobile networks.

“By 2021, 29 percent of the global IP traffic will be carried by Wi-Fi networks from dual mode [mobile] devices,” Cisco says.

Wi-Fi is also expected to handle mobile network offloading for many future Internet of Things (IoT) devices, the company says.

It seems reports of Wi-Fi’s death, like Mark Twain’s, are greatly exaggerated.

Reasons for Wi-Fi’s hold over mobile include speed, since Wi-Fi is often faster than mobile networks, and cost, since Wi-Fi is cheaper for consumers than mobile data. Expect a reversal, though, if mobile network pricing gets cut-rate enough.

Will 5G displace Wi-Fi?

What will happen when 5G mobile networks arrive in or around 2020? Is the writing on the wall for Wi-Fi then? Maybe. For a possible answer, one may need to look at history.

“New cellular technologies with higher speeds than their predecessors tend to have lower offload rates,” Cisco says. That’s because the new networks offer more capacity and consumer-friendly data limits, designed to kick-start adoption, as was the case with 4G’s launch.

In any case, whatever way one looks at it, mobile internet of one kind or another, rather than fixed connections, is where the action is. It’s responsible for web traffic growth, and people want smartphones for consuming media.

“Bigger screens are losing share,” Adobe says in an article accompanying its report.

U.S. government websites corroborate Adobe’s mobile trend. In an August report, the General Services Administration (GSA) said mobile had grabbed 43 percent of all traffic to government websites in December 2016, compared to 36 percent a year before. The agency expects even more growth this year.

“Most industries see more than half of their traffic from mobile devices,” Adobe concludes.

Original article here.

 



Google Launches Public Beta of Cloud Dataprep

2017-09-24

Google recently announced that Google Cloud Dataprep—the new managed data wrangling service developed in collaboration with Trifacta—is now available in public beta. This service enables analysts and data scientists to visually explore and prepare data for analysis in seconds within the Google Cloud Platform.

Now that the Google Cloud Dataprep beta is open to the public, more companies can experience the benefits of Trifacta’s data preparation platform. From predictive transformation to interactive exploration, Trifacta’s intuitive workflow has accelerated the data preparation process for the Google Cloud Dataprep customers who tried it out during the private beta.

In addition to the same functionality found in Trifacta, Google Cloud Dataprep users also benefit from features that are unique to the collaboration with Google:

True SaaS offering 
With Google Cloud Dataprep, there’s no software to install or manage. Unlike a marketplace offering that deploys into the Google ecosystem, Cloud Dataprep is a fully managed service that does not require configuration or administration.

Single Sign On through Google Cloud Identity & Access Management
All users can access Cloud Dataprep with the same credentials they already use for other Google services. This ensures secure, consistent access to Google services and data based on the permissions and roles defined through Google IAM.

Integration to Google Cloud Storage and Google BigQuery (read & write)
Users can browse, preview, and import data from Google Cloud Storage and Google BigQuery, and publish results back to them, directly through Cloud Dataprep. This is a huge boon for teams that rely on Google-generated data. For example (a rough code sketch of the storage-and-load plumbing follows this list):

  • Marketing teams leveraging DoubleClick Ads data can make that data available in Google BigQuery, then use Cloud Dataprep to prepare and publish the result back into BigQuery for downstream analysis. Learn more here.
  • Telematics data scientists can connect Cloud Dataprep directly to raw log data (often in JSON format) stored on Google Cloud Storage, and then prepare it for machine learning models executed in TensorFlow.
  • Retail business analysts can upload Excel data from their desktop to Google Cloud Storage, parse and combine it with BigQuery data to augment the results (beyond the limits of Excel), and eventually make the data available to analytic tools like Google Data Studio, Looker, Tableau, Qlik or Zoomdata.
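
For teams that script the steps around Cloud Dataprep themselves, here is a minimal Python sketch of the retail example’s plumbing, using the official google-cloud-storage and google-cloud-bigquery client libraries. The bucket, table, and file names are hypothetical, and Cloud Dataprep itself is driven through its visual interface rather than code; this only shows how data can land in Cloud Storage and BigQuery on either side of a wrangling flow.

```python
# Hypothetical sketch: stage a local CSV in Cloud Storage, then load it into BigQuery.
# Assumes the google-cloud-storage and google-cloud-bigquery packages are installed
# and application-default credentials are configured.
from google.cloud import bigquery, storage

BUCKET = "my-retail-bucket"                # hypothetical bucket name
BLOB_PATH = "uploads/sales.csv"            # hypothetical object path
TABLE_ID = "my-project.retail.sales_raw"   # hypothetical project.dataset.table

# 1. Upload the desktop Excel export (saved as CSV) to Google Cloud Storage.
storage_client = storage.Client()
storage_client.bucket(BUCKET).blob(BLOB_PATH).upload_from_filename("sales.csv")

# 2. Load the staged file into BigQuery so it can be joined with existing tables.
bq_client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # let BigQuery infer the schema
)
load_job = bq_client.load_table_from_uri(
    f"gs://{BUCKET}/{BLOB_PATH}", TABLE_ID, job_config=job_config
)
load_job.result()  # wait for the load job to finish
print(f"Loaded {bq_client.get_table(TABLE_ID).num_rows} rows into {TABLE_ID}")
```

From there, the loaded table can be opened in Cloud Dataprep for wrangling and the results published back to BigQuery for downstream analysis.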

Big data scale provided by Cloud Dataflow 
By leveraging a serverless, auto-scaling data processing engine (Google Cloud Dataflow), Cloud Dataprep can handle any size of data, located anywhere in the world. This means that users don’t have to worry about optimizing their logic as their data grows, nor have to choose where their jobs run. At the same time, IT can rely on Cloud Dataflow to efficiently scale resources only as needed. Finally, it allows for enterprise-grade monitoring and logging in Google Stackdriver.
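
Cloud Dataprep writes and submits these Dataflow jobs on the user’s behalf, so no pipeline code is required. Purely for illustration, a hand-written Dataflow job (an Apache Beam pipeline in Python) has roughly the shape sketched below; the project, bucket, and filter step are placeholders, not anything Dataprep actually generates.

```python
# Hypothetical illustration only: Cloud Dataprep generates and runs Dataflow jobs for you.
# This hand-written Apache Beam pipeline just shows the shape of such a job.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",             # run on the managed, auto-scaling Dataflow service
    project="my-gcp-project",            # hypothetical project ID
    region="us-central1",
    temp_location="gs://my-bucket/tmp",  # hypothetical staging bucket
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadRawLogs" >> beam.io.ReadFromText("gs://my-bucket/raw/events-*.json")
        | "ParseJson" >> beam.Map(json.loads)
        | "DropMalformed" >> beam.Filter(lambda e: "user_id" in e)
        | "FormatRow" >> beam.Map(json.dumps)
        | "WriteClean" >> beam.io.WriteToText("gs://my-bucket/clean/events")
    )
```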

World-class Google support
As a Google service, Cloud Dataprep is held to the same standards as other Google beta products and services. These benefits include:

  • World class uptime and availability around the world
  • Official support provided by Google Cloud Platform
  • Centralized usage-based billing managed on a per project basis with quotas and detailed reports

Early Google Cloud Dataprep Customer Feedback

Although Cloud Dataprep has only been in private beta for a short time, we’ve had substantial participation from thousands of early users, and we’re excited to share some of the great feedback. Here’s a sample of what early users are saying:

Merkle Inc. 
“Cloud Dataprep allows us to quickly view and understand new datasets, and its flexibility supports our data transformation needs. The GUI is nicely designed, so the learning curve is minimal. Our initial data preparation work is now completed in minutes, not hours or days,” says Henry Culver, IT Architect at Merkle. “The ability to rapidly see our data, and to be offered transformation suggestions in data delivery, is a huge help to us as we look to rapidly assimilate new datasets.”

Venture Development Center

“We needed a platform that was versatile, easy to utilize and provided a migration path as our needs for data review, evaluation, hygiene, interlinking and analysis advanced. We immediately knew that Google Cloud Platform, with Cloud Dataprep and BigQuery, were exactly what we were looking for. As we develop our capability and movement into the data cataloging, QA and delivery cycle, Cloud Dataprep allows us to accomplish this quickly and adeptly,” says Matthew W. Staudt, President of Venture Development Center.

For more information on these customers check out Google’s blog on the public beta launch here.

Cloud Dataprep Public Beta: Furthering Wrangling Success

Now that the beta version of Google Cloud Dataprep is open to the public, we’re excited to see more organizations achieve data wrangling success. From multinational banks to consumer retail companies to government agencies, a growing number of customers are using Trifacta’s consistent transformation logic, user experience, workflow, metadata management, and comprehensive data governance to reduce data preparation times and improve data quality.

If you’re interested in Google Cloud Dataprep, you can sign up for free access with your own personal account or log in using your company’s existing Google account. Visit cloud.google.com/dataprep to learn more.

For more information about how Trifacta interoperates with cloud providers like Google Cloud and with on-prem infrastructure, download our brief.

 

Original article here.

 



New Theory Cracks Open the Black Box of Deep Learning

2017-09-22

A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

Even as machines known as “deep neural networks” have learned to converse, drive cars, beat video games and Go champions, dream, paint pictures and help make scientific discoveries, they have also confounded their human creators, who never expected so-called “deep-learning” algorithms to work so well. No underlying principle has guided the design of these learning systems, other than vague inspiration drawn from the architecture of the brain (and no one really understands how that operates either).

Like a brain, a deep neural network has layers of neurons — artificial ones that are figments of computer memory. When a neuron fires, it sends signals to connected neurons in the layer above. During deep learning, connections in the network are strengthened or weakened as needed to make the system better at sending signals from input data — the pixels of a photo of a dog, for instance — up through the layers to neurons associated with the right high-level concepts, such as “dog.” After a deep neural network has “learned” from thousands of sample dog photos, it can identify dogs in new photos as accurately as people can. The magic leap from special cases to general concepts during learning gives deep neural networks their power, just as it underlies human reasoning, creativity and the other faculties collectively termed “intelligence.” Experts wonder what it is about deep learning that enables generalization — and to what extent brains apprehend reality in the same way.

Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.”

Geoffrey Hinton, a pioneer of deep learning who works at Google and the University of Toronto, emailed Tishby after watching his Berlin talk. “It’s extremely interesting,” Hinton wrote. “I have to listen to it another 10,000 times to really understand it, but it’s very rare nowadays to hear a talk with a really original idea in it that may be the answer to a really major puzzle.”

According to Tishby, who views the information bottleneck as a fundamental principle behind learning, whether you’re an algorithm, a housefly, a conscious being, or a physics calculation of emergent behavior, that long-awaited answer “is that the most important part of learning is actually forgetting.”

The Bottleneck

Tishby began contemplating the information bottleneck around the time that other researchers were first mulling over deep neural networks, though neither concept had been named yet. It was the 1980s, and Tishby was thinking about how good humans are at speech recognition — a major challenge for AI at the time. Tishby realized that the crux of the issue was the question of relevance: What are the most relevant features of a spoken word, and how do we tease these out from the variables that accompany them, such as accents, mumbling and intonation? In general, when we face the sea of data that is reality, which signals do we keep?

“This notion of relevant information was mentioned many times in history but never formulated correctly,” Tishby said in an interview last month. “For many years people thought information theory wasn’t the right way to think about relevance, starting with misconceptions that go all the way to Shannon himself.”

Claude Shannon, the founder of information theory, in a sense liberated the study of information starting in the 1940s by allowing it to be considered in the abstract — as 1s and 0s with purely mathematical meaning. Shannon took the view that, as Tishby put it, “information is not about semantics.” But, Tishby argued, this isn’t true. Using information theory, he realized, “you can define ‘relevant’ in a precise sense.”

Imagine X is a complex data set, like the pixels of a dog photo, and Y is a simpler variable represented by those data, like the word “dog.” You can capture all the “relevant” information in X about Y by compressing X as much as you can without losing the ability to predict Y. In their 1999 paper, Tishby and co-authors Fernando Pereira, now at Google, and William Bialek, now at Princeton University, formulated this as a mathematical optimization problem. It was a fundamental idea with no killer application.
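
In the notation commonly used for the information bottleneck (the article itself doesn’t spell out the math), with T standing for the compressed representation of X and β a knob that trades compression against prediction, the optimization is usually written along these lines:

```latex
% Information bottleneck objective: compress X into T while keeping what T says about Y.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

Making I(X;T) small squeezes the representation through the bottleneck, while the β I(T;Y) term rewards keeping whatever in T still predicts Y.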

“I’ve been thinking along these lines in various contexts for 30 years,” Tishby said. “My only luck was that deep neural networks became so important.”

Eyeballs on Faces on People on Scenes

Though the concept behind deep neural networks had been kicked around for decades, their performance in tasks like speech and image recognition only took off in the early 2010s, due to improved training regimens and more powerful computer processors. Tishby recognized their potential connection to the information bottleneck principle in 2014 after reading a surprising paper by the physicists David Schwab and Pankaj Mehta.

The duo discovered that a deep-learning algorithm invented by Hinton called the “deep belief net” works, in a particular case, exactly like renormalization, a technique used in physics to zoom out on a physical system by coarse-graining over its details and calculating its overall state. When Schwab and Mehta applied the deep belief net to a model of a magnet at its “critical point,” where the system is fractal, or self-similar at every scale, they found that the network automatically used the renormalization-like procedure to discover the model’s state. It was a stunning indication that, as the biophysicist Ilya Nemenman said at the time, “extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.”

The only problem is that, in general, the real world isn’t fractal. “The natural world is not ears on ears on ears on ears; it’s eyeballs on faces on people on scenes,” Cranmer said. “So I wouldn’t say [the renormalization procedure] is why deep learning on natural images is working so well.” But Tishby, who at the time was undergoing chemotherapy for pancreatic cancer, realized that both deep learning and the coarse-graining procedure could be encompassed by a broader idea. “Thinking about science and about the role of my old ideas was an important part of my healing and recovery,” he said.

In 2015, he and his student Noga Zaslavsky hypothesized that deep learning is an information bottleneck procedure that compresses noisy data as much as possible while preserving information about what the data represent. Tishby and Shwartz-Ziv’s new experiments with deep neural networks reveal how the bottleneck procedure actually plays out. In one case, the researchers used small networks that could be trained to label input data with a 1 or 0 (think “dog” or “no dog”) and gave their 282 neural connections random initial strengths. They then tracked what happened as the networks engaged in deep learning with 3,000 sample input data sets.

The basic algorithm used in the majority of deep-learning procedures to tweak neural connections in response to data is called “stochastic gradient descent”: Each time the training data are fed into the network, a cascade of firing activity sweeps upward through the layers of artificial neurons. When the signal reaches the top layer, the final firing pattern can be compared to the correct label for the image — 1 or 0, “dog” or “no dog.” Any differences between this firing pattern and the correct pattern are “back-propagated” down the layers, meaning that, like a teacher correcting an exam, the algorithm strengthens or weakens each connection to make the network layer better at producing the correct output signal. Over the course of training, common patterns in the training data become reflected in the strengths of the connections, and the network becomes expert at correctly labeling the data, such as by recognizing a dog, a word, or a 1.
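
To make those mechanics concrete, here is a minimal NumPy sketch of a single stochastic-gradient-descent update for a tiny one-hidden-layer classifier with a binary “dog”/“no dog” output. It is a toy illustration only; the layer sizes, activation functions, and learning rate are arbitrary choices, not the networks or code used in the experiments described here.

```python
# Toy illustration of one stochastic-gradient-descent / backpropagation step
# for a tiny "dog" vs. "no dog" classifier. Sizes and learning rate are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(12, 8))   # input layer -> hidden layer
W2 = rng.normal(scale=0.1, size=(8, 1))    # hidden layer -> output neuron


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def sgd_step(x, y, W1, W2, lr=0.1):
    """One update for a single (input, label) pair; returns the updated weights."""
    # Forward pass: activity sweeps upward through the layers.
    h = np.tanh(x @ W1)                     # hidden-layer firing pattern
    p = sigmoid(h @ W2)                     # network's belief that the label is 1 ("dog")

    # Backward pass: the output error is propagated back down the layers.
    d_out = p - y                           # gradient at the output (sigmoid + cross-entropy)
    grad_W2 = np.outer(h, d_out)
    d_hidden = (W2 @ d_out) * (1.0 - h**2)  # chain rule through the tanh hidden layer
    grad_W1 = np.outer(x, d_hidden)

    # Strengthen or weaken each connection a little, nudging the output toward the label.
    return W1 - lr * grad_W1, W2 - lr * grad_W2


x = rng.normal(size=12)                     # a stand-in for the pixels of one image
W1, W2 = sgd_step(x, 1.0, W1, W2)           # label 1 means "dog"
```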

In their experiments, Tishby and Shwartz-Ziv tracked how much information each layer of a deep neural network retained about the input data and how much information each one retained about the output label. The scientists found that, layer by layer, the networks converged to the information bottleneck theoretical bound: a theoretical limit derived in Tishby, Pereira and Bialek’s original paper that represents the absolute best the system can do at extracting relevant information. At the bound, the network has compressed the input as much as possible without sacrificing the ability to accurately predict its label.
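
How much information a layer “retains” is a mutual-information quantity, which has to be estimated from samples. One common way to approximate it is to bin each layer’s continuous activations and compute a plug-in estimate from the resulting counts; the helper below is a hypothetical Python sketch of that idea, not the authors’ code, and the variable names are invented for illustration.

```python
# Hypothetical sketch of a binning-based mutual-information estimate between
# a layer's activations T and a discrete variable (the inputs X or the labels Y).
from collections import Counter

import numpy as np


def discretize(activations, n_bins=30):
    """Bin continuous activations and collapse each row to one hashable symbol."""
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)
    binned = np.digitize(activations, edges[1:-1])
    return [tuple(row) for row in binned]


def mutual_information(a_symbols, b_symbols):
    """Plug-in estimate of I(A; B) in bits from paired discrete samples."""
    n = len(a_symbols)
    joint = Counter(zip(a_symbols, b_symbols))
    pa = Counter(a_symbols)
    pb = Counter(b_symbols)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * np.log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi


# Example: estimate I(T; Y) for one layer's activations against binary labels.
layer_T = np.random.default_rng(1).normal(size=(1000, 8))    # stand-in activations
labels_Y = np.random.default_rng(2).integers(0, 2, size=1000)
print(mutual_information(discretize(layer_T), list(labels_Y)))
```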

Tishby and Shwartz-Ziv also made the intriguing discovery that deep learning proceeds in two phases: a short “fitting” phase, during which the network learns to label its training data, and a much longer “compression” phase, during which it becomes good at generalization, as measured by its performance at labeling new test data.

As a deep neural network tweaks its connections by stochastic gradient descent, at first the number of bits it stores about the input data stays roughly constant or increases slightly, as connections adjust to encode patterns in the input and the network gets good at fitting labels to it. Some experts have compared this phase to memorization.

Then learning switches to the compression phase. The network starts to shed information about the input data, keeping track of only the strongest features — those correlations that are most relevant to the output label. This happens because, in each iteration of stochastic gradient descent, more or less accidental correlations in the training data tell the network to do different things, dialing the strengths of its neural connections up and down in a random walk. This randomization is effectively the same as compressing the system’s representation of the input data. As an example, some photos of dogs might have houses in the background, while others don’t. As a network cycles through these training photos, it might “forget” the correlation between houses and dogs in some photos as other photos counteract it. It’s this forgetting of specifics, Tishby and Shwartz-Ziv argue, that enables the system to form general concepts. Indeed, their experiments revealed that deep neural networks ramp up their generalization performance during the compression phase, becoming better at labeling test data. (A deep neural network trained to recognize dogs in photos might be tested on new photos that may or may not include dogs, for instance.)

It remains to be seen whether the information bottleneck governs all deep-learning regimes, or whether there are other routes to generalization besides compression. Some AI experts see Tishby’s idea as one of many important theoretical insights about deep learning to have emerged recently. Andrew Saxe, an AI researcher and theoretical neuroscientist at Harvard University, noted that certain very large deep neural networks don’t seem to need a drawn-out compression phase in order to generalize well. Instead, researchers program in something called early stopping, which cuts training short to prevent the network from encoding too many correlations in the first place.

Tishby argues that the network models analyzed by Saxe and his colleagues differ from standard deep neural network architectures, but that nonetheless, the information bottleneck theoretical bound defines these networks’ generalization performance better than other methods. Questions about whether the bottleneck holds up for larger neural networks are partly addressed by Tishby and Shwartz-Ziv’s most recent experiments, not included in their preliminary paper, in which they train much larger, 330,000-connection deep neural networks to recognize handwritten digits in the 60,000-image Modified National Institute of Standards and Technology database, a well-known benchmark for gauging the performance of deep-learning algorithms. The scientists saw the same convergence of the networks to the information bottleneck theoretical bound; they also observed the two distinct phases of deep learning, separated by an even sharper transition than in the smaller networks. “I’m completely convinced now that this is a general phenomenon,” Tishby said.

Humans and Machines

The mystery of how brains sift signals from our senses and elevate them to the level of our conscious awareness drove much of the early interest in deep neural networks among AI pioneers, who hoped to reverse-engineer the brain’s learning rules. AI practitioners have since largely abandoned that path in the mad dash for technological progress, instead slapping on bells and whistles that boost performance with little regard for biological plausibility. Still, as their thinking machines achieve ever greater feats — even stoking fears that AI could someday pose an existential threat — many researchers hope these explorations will uncover general insights about learning and intelligence.

Brenden Lake, an assistant professor of psychology and data science at New York University who studies similarities and differences in how humans and machines learn, said that Tishby’s findings represent “an important step towards opening the black box of neural networks,” but he stressed that the brain represents a much bigger, blacker black box. Our adult brains, which boast several hundred trillion connections between 86 billion neurons, in all likelihood employ a bag of tricks to enhance generalization, going beyond the basic image- and sound-recognition learning procedures that occur during infancy and that may in many ways resemble deep learning.

For instance, Lake said the fitting and compression phases that Tishby identified don’t seem to have analogues in the way children learn handwritten characters, which he studies. Children don’t need to see thousands of examples of a character and compress their mental representation over an extended period of time before they’re able to recognize other instances of that letter and write it themselves. In fact, they can learn from a single example. Lake and his colleagues’ models suggest the brain may deconstruct the new letter into a series of strokes — previously existing mental constructs — allowing the conception of the letter to be tacked onto an edifice of prior knowledge. “Rather than thinking of an image of a letter as a pattern of pixels and learning the concept as mapping those features” as in standard machine-learning algorithms, Lake explained, “instead I aim to build a simple causal model of the letter,” a shorter path to generalization.

Such brainy ideas might hold lessons for the AI community, furthering the back-and-forth between the two fields. Tishby believes his information bottleneck theory will ultimately prove useful in both disciplines, even if it takes a more general form in human learning than in AI. One immediate insight that can be gleaned from the theory is a better understanding of which kinds of problems can be solved by real and artificial neural networks. “It gives a complete characterization of the problems that can be learned,” Tishby said. These are “problems where I can wipe out noise in the input without hurting my ability to classify. This is natural vision problems, speech recognition. These are also precisely the problems our brain can cope with.”

Meanwhile, both real and artificial neural networks stumble on problems in which every detail matters and minute differences can throw off the whole result. Most people can’t quickly multiply two large numbers in their heads, for instance. “We have a long class of problems like this, logical problems that are very sensitive to changes in one variable,” Tishby said. “Classifiability, discrete problems, cryptographic problems. I don’t think deep learning will ever help me break cryptographic codes.”

Generalizing — traversing the information bottleneck, perhaps — means leaving some details behind. This isn’t so good for doing algebra on the fly, but that’s not a brain’s main business. We’re looking for familiar faces in the crowd, order in chaos, salient signals in a noisy world.

Original article here.



AI will be Bigger than the Internet

2017-09-19

AI will be the next general purpose technology (GPT), according to experts. Beyond the disruption of business and data – things that many of us don’t need to care about – AI is going to change the way most people live, as well.

As a GPT, AI is predicted to integrate within our entire society in the next few years, and become entirely mainstream — like electricity and the internet.

The field of AI research has the potential to fundamentally change more technologies than, arguably, anything before it. While electricity brought illumination and the internet revolutionized communication, machine learning is already disrupting finance, chemistry, diagnostics, analytics, and consumer electronics – to name a few. This is going to bring efficiency to a world with more data than we know what to do with.

It’s also going to disrupt your average person’s day — in a good way — like other GPTs before it did.

Anyone who lived in a world before Google and the internet may recall a time when people would actually have arguments about simple facts. There wasn’t an easy way, while riding in a car, to determine which band sang a song that was on the radio. If the DJ didn’t announce the name of the artist and track before a song played, you could be subject to anywhere from three to seven minutes of heated discussion over whether “The Four Tops” or “The Temptations” sang a particular song, for example.

Today we’re used to looking up things, and for many of us it’s almost second nature. We’re throwing cookbooks out, getting rid of encyclopedias, and libraries are mostly meeting places for fiction enthusiasts these days. This is what a general purpose technology does — it changes everything.

If your doctor told you they didn’t believe in the internet you’d get a new doctor. Imagine a surgeon who chose not to use electricity — would you let them operate on you?

The AI that truly changes the world beyond simply augmenting humans, like assisted steering does, is the one that starts removing other technology from our lives, like the internet did. With the web we’ve shrunken millions of books and videos down to the size of a single iPhone, at least as far as consumers are concerned.

AI is being layered into our everyday lives as a general purpose technology, like electricity and the internet. Once it reaches its early potential, we’ll get back seconds of time at first, then minutes. Eventually we’ll have devices smart enough to no longer need direction at every single step, giving us back all the time we lost when we started splitting our reality between people and computers.

Siri and Cortana won’t need to be told what to do all the time, for example, once AI learns to start paying attention to the world outside of the smartphone.

Now, if only I could convince the teenager in my house to do the same …

Original article here.



Bluetooth White Paper Identified 8 Vulnerabilities (video)

2017-09-13

If you ask two researchers what the problem with Bluetooth is, they will have a simple answer.

“Bluetooth is complicated. Too complicated. Too many specific applications are defined in the stack layer, with endless replication of facilities and features.” Case in point: the WiFi specification (802.11) is only 450 pages long, they said, while the Bluetooth specification reaches 2822 pages.

Unfortunately, they added, the complexity has “kept researchers from auditing its implementations at the same level of scrutiny that other highly exposed protocols, and outwards-facing interfaces have been treated with.”

A lack of review can leave vulnerabilities waiting to be identified.

And that is a fitting segue to this week’s news about devices with Bluetooth capabilities.

At Armis Labs, Ben Seri and Gregory Vishnepolsky are the two researchers who discussed the vulnerabilities in modern Bluetooth stacks, which run on an estimated 8.2 billion-plus devices, according to the Armis site’s overview.

Seri and Vishnepolsky are the authors of a 42-page white paper detailing what is wrong and at stake in their findings. The discovery is being described as an “attack vector endangering major mobile, desktop, and IoT operating systems, including Android, iOS, Windows, and Linux, and the devices using them.”

They are calling the vector BlueBorne, as it spreads via the air and attacks devices via Bluetooth. Attackers can hack into cellphones and computers simply because they had Bluetooth on. “Just by having Bluetooth on, we can get malicious code on your device,” Nadir Izrael, CTO and cofounder of security firm Armis, told Ars Technica.

Let’s ponder this, as it highlights a troubling aspect of the attack. As Lorenzo Franceschi-Bicchierai reported at Motherboard:

“‘The user is not involved in the process, they don’t need to be in discoverable mode, they don’t have to have a Bluetooth connection active, just have Bluetooth on,’ Nadir Izrael, the co-founder and chief technology officer for Armis, told Motherboard.”

Their white paper identified eight vulnerabilities. (The authors thanked Alon Livne for the development of the Linux RCE exploit.)

Original article here.



The Scale of the Internet 2017 (infograph)

2017-09-04

Just a month ago, it was revealed that Facebook has more than two billion active monthly users. That means that in any given month, more than 25% of Earth’s population logs in to their Facebook account at least once.

This kind of scale is almost impossible to grasp.

Here’s one attempt to put it in perspective: imagine Yankee Stadium’s seats packed with 50,000 people, and multiply this by a factor of 40,000.

That’s about how many different people log into Facebook every month worldwide.
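
As a quick sanity check, the arithmetic behind that analogy works out using only the figures quoted above:

```python
# Back-of-the-envelope check on the Yankee Stadium analogy, using the article's own figures.
seats_per_stadium = 50_000
stadium_multiples = 40_000
print(seats_per_stadium * stadium_multiples)  # 2,000,000,000 -> roughly Facebook's 2 billion monthly users
```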

A smaller window

The Yankee Stadium analogy sort of helps, but it’s still very hard to picture.

The scale of the internet is so great that it doesn’t make sense to look at the information on a monthly basis, or even to use daily figures.

Instead, let’s drill down to what happens in just one internet minute:

Created each year by Lori Lewis and Chadd Callahan of Cumulus Media, the above graphic shows the incredible scale of e-commerce, social media, email, and other content creation that happens on the web.

Content competition

If you’ve ever had a post on Facebook or Instagram fizzle out, it’s safe to say that the above proliferation of content in our social feeds is part of the cause.

In a social media universe where there are no barriers to entry and almost infinite amounts of competition, the content game has tilted to become a “winner take all” scenario. Since people don’t have the time to look at the 452,200 tweets sent every minute, they naturally gravitate to the things that already have social proof.

People look to the people they trust to see what’s already being talked about, which is why influencers are more important than ever to marketers.

Eyes on the prize

For those that are able to get the strategy and timing right, the potential spoils are mouth-watering.

The never-ending challenge, however, is how to stand out from the crowd.

Original article here.

